
feat(providers): add GitHub Copilot community provider (@github/copilot-sdk)#1

Open
popemkt wants to merge 92 commits into dev from emdash/add-copilot-2er

Conversation

popemkt (Owner) commented Apr 21, 2026

Summary

  • Problem: GitHub Copilot shipped a first-party SDK (@github/copilot-sdk) in 2025 that drives the Copilot CLI through a supported JSON-RPC bridge. Archon had no way to use it — the only path to Copilot would have been screen-scraping the interactive TUI.
  • Why it matters: Copilot CLI gives users access to gpt-5, gpt-5-mini, and cross-family models (Claude via Copilot) under an existing GitHub billing relationship. This is the second community provider after Pi (coleam00/Archon#1270) and validates that the community-provider seam scales.
  • What changed: New packages/providers/src/community/copilot/ wrapping @github/copilot-sdk, registered via registerCopilotProvider() with builtIn: false. Seven phased commits: initial scaffold → tool restrictions → MCP → skills → structured output → hardening fixes from CodeRabbit/Devin → sub-agents. Nine of the thirteen capability flags are wired end-to-end.
  • What did NOT change (scope boundary): No hooks, no fallback model, no sandbox, no promotion to builtIn: true. Those remain deliberate follow-ups (plan preserved at .claude/archon/plans/copilot-provider-parity.md). Claude and Codex code paths untouched. Pi provider's behavior is preserved byte-for-byte via re-exports.

Architecture Diagram

Before

  @archon/workflows
        │
        │ IAgentProvider
        ▼
  @archon/providers ──┬── claude/                  ─── @anthropic-ai/claude-agent-sdk
  (registry)          ├── codex/                   ─── @openai/codex-sdk
                      └── community/pi/            === @mariozechner/pi-coding-agent

  registerCommunityProviders() — pi (builtIn: false)

After

  @archon/workflows
        │
        │ IAgentProvider  (unchanged contract)
        ▼
  @archon/providers ──┬── claude/                  ─── @anthropic-ai/claude-agent-sdk
  (registry)          ├── codex/                   ─── @openai/codex-sdk
                      ├── community/pi/            === @mariozechner/pi-coding-agent
                      ├── [+] community/copilot/   === @github/copilot-sdk
                      └── [+] shared/              ─── skills.ts, structured-output.ts
                                                       (extracted from pi/, pi re-exports)

  registerCommunityProviders() — pi + [+] copilot (both builtIn: false)

Shared extractions (no behavior change to Pi):

| Extracted to | Source | Back-compat |
|---|---|---|
| shared/skills.ts::resolveSkillDirectories | pi/options-translator.ts::resolvePiSkills | Pi re-exports under the original name; all 14 existing Pi skill tests pass unchanged |
| shared/structured-output.ts::augmentPromptForJsonSchema | pi/provider.ts | Pi re-exports; no Pi test changes |
| shared/structured-output.ts::tryParseStructuredOutput | pi/event-bridge.ts | Pi re-exports; all 9 existing Pi JSON-parse tests pass unchanged |
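As an illustration of the parse contract in the last row (a minimal sketch with invented `Sketch` names; the real shared/structured-output.ts may differ):

```typescript
// Hypothetical simplification of a best-effort structured-output parse:
// strip an optional ```json fence, then JSON.parse; return undefined on
// failure so the caller can leave structuredOutput unset.
function tryParseStructuredOutputSketch(text: string): unknown | undefined {
  const fenced = text.match(/```(?:json)?\s*([\s\S]*?)```/);
  const candidate = (fenced ? fenced[1] : text).trim();
  try {
    return JSON.parse(candidate);
  } catch {
    return undefined; // parse failure → structuredOutput stays unset downstream
  }
}
```

Returning `undefined` rather than throwing is what lets the executor's existing degradation path take over.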

MCP reuse (no extraction): Copilot imports Claude's existing public loadMcpConfig re-export (packages/providers/src/claude/index.ts:4) directly. Env-var expansion and missing-var detection behave identically across providers.
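A minimal sketch of the kind of $VAR expansion with missing-var detection described here (function name and shape are invented, not the actual loadMcpConfig internals):

```typescript
// Expand $VAR / ${VAR} references against an env map; collect names that
// resolve to nothing so the caller can surface one consolidated warning.
function expandEnvVarsSketch(
  value: string,
  env: Record<string, string | undefined>,
): { expanded: string; missing: string[] } {
  const missing: string[] = [];
  const expanded = value.replace(/\$\{?([A-Z_][A-Z0-9_]*)\}?/g, (match, name) => {
    const v = env[name];
    if (v === undefined) {
      missing.push(name);
      return match; // leave the reference unresolved so the warning is visible
    }
    return v;
  });
  return { expanded, missing };
}
```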

Capability matrix

| Field | Copilot | Notes |
|---|---|---|
| sessionResume | ✅ | Returns sessionId, reused on resume |
| envInjection | ✅ | Codebase env vars merged into spawned CLI environment |
| effortControl | ✅ | `effort: low\|medium\|high\|xhigh\|max` → reasoningEffort |
| thinkingControl | ✅ | String form of thinking: accepted; object form warns (Claude-specific) |
| toolRestrictions | ✅ | allowed_tools → availableTools, denied_tools → excludedTools |
| mcp | ✅ | nodeConfig.mcp JSON → SessionConfig.mcpServers, $VAR expanded |
| skills | ✅ | skills: [name] → absolute dirs containing SKILL.md, missing names warn |
| structuredOutput | ✅ | Best-effort prompt-engineering (same pattern as Pi coleam00#1297) |
| agents | ✅ | nodeConfig.agents → customAgents; Claude-specific fields (model, disallowedTools, skills, maxTurns) warn per agent |
| hooks | ❌ | Copilot's SessionHooks event vocabulary diverges from Archon's Claude-shaped hook schema; deferred to avoid partial mapping |
| fallbackModel | ❌ | No native SDK field; one-shot retry would be adapter-level — deferred |
| costControl | ❌ | Not wired (no maxBudgetUsd enforcement) |
| sandbox | ❌ | Copilot's onPermissionRequest is policy control, not sandbox semantics; intentionally not declared equivalent |

Implementation (7 commits)

  1. 9546ea75 initial provider — CopilotProvider + parseCopilotConfig + config-aware resolveCopilotCliPath (env > config > vendor in binary mode > PATH > dev-undefined > throw). Auth: copilot login, COPILOT_GITHUB_TOKEN, GH_TOKEN, GITHUB_TOKEN. Registration wired at registerCommunityProviders(). SAFE_ASSISTANT_FIELDS gains copilot: ['model'].

  2. f412b835 refactor + tool restrictions — Extracted single-source-of-truth buildSessionConfig() with a ProviderWarning[] collector, so each subsequent phase plugs in as one applyX(sessionConfig, …, warnings) call. applyToolRestrictions passes allowed_tools / denied_tools through verbatim (SDK tool names are canonical).

  3. 94b7f472 MCP — applyMcpServers reuses Claude's public loadMcpConfig. Missing env vars surface as a consolidated system-warning chunk; IO / JSON errors propagate. enableConfigDiscovery stays false by default (see trust-boundary note in docs).

  4. a19829064 skills — Extracted Pi's resolvePiSkills → shared/skills.ts::resolveSkillDirectories. Pi re-exports under the historical name so every existing Pi test passes unchanged. Copilot's applySkills maps names → absolute paths → SessionConfig.skillDirectories; unresolved names warn.

  5. d7719bba structured output (best-effort) — Extracted Pi's augmentPromptForJsonSchema + tryParseStructuredOutput → shared/structured-output.ts. Same prompt-engineering + JSON-parse approach as Pi (coleam00/Archon#1297). Parse failure leaves structuredOutput unset so the executor's existing dag.structured_output_missing path handles degradation.

  6. 8a3504d7 PR-review hardening — Addresses CodeRabbit + Devin findings:

    • Abort-guard: checks abortSignal.aborted before awaiting sendAndWait, since addEventListener('abort', ...) is a no-op on already-aborted signals and would otherwise enter the 24-hour timeout path after cancel.
    • Cleanup: session.disconnect() and client.stop() each in their own try/catch-log so a cleanup throw can't replace the primary result / friendly error.
    • Flips sawAssistantContent = true when emitting the final-message fallback, preventing a spurious session.error warning from double-firing.
    • Trims whitespace on model strings before assigning to SessionConfig.model.
    • Replaces existsSync() with isExecutableFile() (isFile + exec bit on posix, isFile on win32) so env/config/vendor overrides fail early.
    • Binary-resolver test switched to per-test mkdtempSync + chmod 0o755 — no shared /tmp/.archon state.
    • Docs: trust-boundary note added for enableConfigDiscovery.
  7. 31d94d4d sub-agents — applyAgents maps nodeConfig.agents to SessionConfig.customAgents with name/description/prompt/tools (allowlist) passed through. Archon fields Copilot can't represent (model, disallowedTools, skills, maxTurns) warn per-agent. Intentionally does NOT set SessionConfig.agent — Archon invokes sub-agents via the Task tool, not by switching session-level agent.
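The buildSessionConfig() + ProviderWarning[] collector pattern from commit 2 can be sketched roughly like this (all types and apply functions here are illustrative stand-ins, not the real adapter code):

```typescript
// Each phase plugs in as one applyX(config, ..., warnings) call against a
// single shared config object; warnings accumulate instead of throwing.
interface ProviderWarning { code: string; message: string }
interface SessionConfigSketch { model?: string; availableTools?: string[] }

function applyToolRestrictionsSketch(
  config: SessionConfigSketch,
  allowed: string[] | undefined,
  warnings: ProviderWarning[],
): void {
  if (!allowed) return;
  if (allowed.length === 0) {
    warnings.push({ code: 'tools.empty', message: 'allowed_tools is empty' });
    return;
  }
  config.availableTools = allowed; // SDK tool names are canonical: pass through verbatim
}

function buildSessionConfigSketch(allowed?: string[]) {
  const warnings: ProviderWarning[] = [];
  const config: SessionConfigSketch = {};
  applyToolRestrictionsSketch(config, allowed, warnings); // later phases add more applyX calls here
  return { config, warnings };
}
```

The point of the collector is that MCP, skills, structured output, and agents each become one more call in the builder rather than a second SessionConfig construction path.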

Validation Evidence

bun run validate    # ← fully green end-to-end
  • Static analysis: bun run type-check — all 10 packages exit 0; bun run lint — 0 errors 0 warnings; bun run format:check — clean.
  • Tests: bun --filter @archon/providers test passes every per-file batch.
  • New Copilot tests (9 files, 52 tests):
| Test file | Tests |
|---|---|
| community/copilot/config.test.ts | 3 (defensive parsing) |
| community/copilot/binary-resolver.test.ts | 10 (env/config/vendor/PATH precedence + isExecutableFile) |
| community/copilot/provider.test.ts | 6 (happy path, streaming, resume, reasoning warning, auth error) |
| community/copilot/tool-restrictions.test.ts | 5 (pass-through, resume path) |
| community/copilot/mcp-translation.test.ts | 4 (omit/load/env-expand/missing-file) |
| community/copilot/skills-translation.test.ts | 4 (absent/resolved/missing/nothing-resolves) |
| community/copilot/structured-output.test.ts | 7 (prompt augment, valid JSON, fences, unparseable, absent, fallback-msg parse) |
| community/copilot/agents-translation.test.ts | 7 (pass-through, per-agent warning, ordering, Task-tool contract) |
| community/copilot/provider-hardening.test.ts | 6 (abort-before-start, model trim, fallback flag, cleanup non-masking) |

Plus 6 new tests in registry.test.ts exercising Copilot registration, idempotence, capability flags, and isModelCompatible. All 12 Pi tests that exercise the extracted shared utilities via the Pi re-exports pass unchanged (byte-for-byte identical function bodies).

Not yet validated (manual): end-to-end run of .archon/workflows/e2e-copilot-smoke.yaml with a real Copilot CLI auth. The smoke workflow asserts the happy path returns COPILOT_OK; first real-timing exercise happens when a reviewer runs it locally after copilot login (or setting COPILOT_GITHUB_TOKEN / GH_TOKEN / GITHUB_TOKEN).

Security Impact

  • New permissions/capabilities? No — Copilot runs with the same process privileges as Claude / Codex / Pi (user-level, no elevation).
  • New external network calls? Yes — the Copilot CLI calls GitHub's Copilot API. Equivalent surface to the other providers' SDK calls.
  • Secrets/tokens handling changed? Tokens read from the per-request env + process.env (COPILOT_GITHUB_TOKEN, GH_TOKEN, GITHUB_TOKEN) and passed to CopilotClient({ githubToken, ... }). Not persisted by Archon. useLoggedInUser: true is the default when no token env var is present, delegating auth to the Copilot CLI's existing credential store.
  • File-system access scope changed? No — the Copilot CLI operates under cwd like other providers. enableConfigDiscovery: false is Archon's default, preventing the CLI from auto-loading repo .mcp.json / skill dirs outside nodeConfig.mcp / nodeConfig.skills (documented as a trust boundary).
  • Binary resolution hardening: isExecutableFile validates the resolved CLI path is a regular file with the exec bit set (posix), not a directory or non-executable, so env/config/vendor overrides fail early rather than passing a bad path to the SDK.
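The binary-resolution hardening above could look roughly like this (a sketch under the stated posix/win32 rules; not the actual implementation):

```typescript
import { statSync, accessSync, constants } from 'node:fs';

// Regular file + exec bit on posix; regular file only on win32.
// Rejects directories, missing paths, and non-executable files early,
// before a bad path ever reaches the SDK.
function isExecutableFileSketch(path: string): boolean {
  try {
    const st = statSync(path);
    if (!st.isFile()) return false; // directories and special files fail here
    if (process.platform === 'win32') return true;
    accessSync(path, constants.X_OK); // throws if the exec bit is not set
    return true;
  } catch {
    return false; // missing path, permission error, or no exec bit
  }
}
```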

Compatibility / Migration

  • Backward compatible? Yes — purely additive. Existing workflows untouched.
  • Config/env changes? Optional — users may add assistants.copilot.model to .archon/config.yaml and supply auth via copilot login or one of three env vars. Nothing required for existing installs.
  • Database migration needed? No.
  • Shared-utility extractions are back-compat via re-exports — pi/options-translator.ts and pi/provider.ts / pi/event-bridge.ts re-export the extracted functions under their historical names. No external import paths change.

Human Verification

Verified scenarios (unit tests against mocked Copilot SDK + real filesystem/env-var fixtures):

  • Assistant / reasoning / tool / tool_result / result chunk translation
  • Session resume (reuses resumeSession path)
  • Effort mapping: low|medium|high|xhigh|max → reasoningEffort; string thinking warns; off disables
  • System prompt pass-through
  • Env-var merge with caller-wins semantics
  • Binary resolver precedence: env > config > vendor-in-binary-mode > PATH > dev-undefined > throw
  • isExecutableFile rejects directories, missing paths, and non-executable files on posix
  • allowed_tools / denied_tools pass-through on both create and resume paths
  • MCP JSON load, $VAR expansion, missing-var warning, relative-path resolution, malformed-file error
  • Skill resolution from .agents/skills/ / .claude/skills/ (project or home), missing-skill warning
  • Best-effort JSON parse: valid JSON, fenced ```json blocks, prose-wrapped → no structuredOutput, absent outputFormat → never set
  • Sub-agent mapping: description + prompt + tools pass through; model / disallowedTools / skills / maxTurns warn; empty object / absent → omitted; SessionConfig.agent never set
  • Abort-before-start rejects with AbortError and never enters sendAndWait
  • Model string trim on both requestOptions.model and assistantConfig.model
  • Cleanup (disconnect + stop) failures logged and swallowed, never mask the primary result or the friendly auth/model-access error
  • Final-message fallback flips sawAssistantContent, preventing double-emission of a session.error warning
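The abort-before-start guard in the list above can be sketched as follows (sendAndWait is a stand-in for the real SDK call; this is illustrative, not the adapter's code):

```typescript
// addEventListener('abort', ...) never fires on an already-aborted signal,
// so the aborted flag must be checked before awaiting the long-running call.
async function runWithAbortGuard<T>(
  signal: AbortSignal,
  sendAndWait: () => Promise<T>,
): Promise<T> {
  if (signal.aborted) {
    // Without this check, a pre-aborted signal would fall straight through
    // into the long-running call (the 24-hour timeout path described above).
    throw new DOMException('Aborted before start', 'AbortError');
  }
  return sendAndWait();
}
```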

What was NOT verified:

  • Live end-to-end run with real Copilot CLI auth (smoke workflow staged, waiting on reviewer)
  • Behavior against every Copilot model (gpt-5, gpt-5-mini, Claude-via-Copilot aliases)
  • Copilot SDK's hook / permission semantics under an actual permission-request event (we default to approveAll, matching Claude / Codex's bypassPermissions default)

Side Effects / Blast Radius

  • Affected subsystems: only workflows / chat sessions that opt in via provider: copilot. Claude, Codex, and Pi code paths are untouched at runtime. The two Pi shared-utility call sites go through re-exports, so Pi's observable behavior is byte-identical.
  • Bundle size: adds @github/copilot-sdk (~few MB, includes the CLI bridge types). Loaded only when CopilotProvider.sendQuery runs.
  • Registry surface: GET /api/providers returns 4 entries (claude, codex, copilot, pi). Web UI already iterates generically.
  • Logs added: copilot.binary_resolved, copilot.mcp_loaded, copilot.mcp_env_vars_missing (warn), copilot.skills_resolved, copilot.agents_registered, copilot.abort_failed (warn), copilot.disconnect_failed (warn), copilot.client_stop_errors (warn), copilot.client_stop_threw (warn). All under the provider.copilot Pino module, matching the {domain}.{action}_{state} convention.

Rollback Plan

  • Fast rollback: revert the PR merge commit on dev. Seven commits, purely additive, no schema or interface changes — revert is clean.
  • User-side rollback: remove provider: copilot from workflow YAML or assistants.copilot from .archon/config.yaml.
  • Observable failure symptoms:
    • copilot.*_failed / copilot.*_errors / copilot.*_threw log events under provider.copilot
    • User-visible ⚠️ system-chunk messages for missing skills, missing MCP env vars, unsupported thinking / effort shapes, and ignored per-agent fields
    • Workflow node failures on provider: copilot nodes with friendly messages for auth errors, model-access errors, or binary-not-found

Risks and Mitigations

  • Risk: @github/copilot-sdk is pre-1.0 (v0.2.2) — breaking changes possible on minor bumps.
    • Mitigation: pinned ^0.2.2. The 52 new Copilot tests plus type-check gate any API-surface drift before merge.
  • Risk: Generator abandonment without an AbortSignal can leave a Copilot session holding resources until sendAndWait resolves (up to 24 h).
    • Mitigation: the dag-executor (primary caller) always threads an AbortSignal, limiting blast radius. A follow-up can add an internal AbortController wired into the generator's finally to cover the abandonment path. Tracked in a review comment on this PR.
  • Risk: Best-effort structured output depends on model instruction-following, not SDK enforcement.
  • Risk: enableConfigDiscovery: true would let the Copilot CLI load repo-level config (MCP servers, skill dirs) outside Archon's workflow validation surface.
    • Mitigation: defaults to false. Docs carry an explicit trust-boundary note (packages/docs-web/.../ai-assistants.md).
  • Risk: Cross-provider coupling — loadMcpConfig is imported directly from Claude's module instead of extracted to shared/.
    • Mitigation: loadMcpConfig is a public re-export in packages/providers/src/claude/index.ts, not an internal detail. By CLAUDE.md's Rule of Three, extraction to shared/mcp.ts can wait until a third provider needs it.

🤖 Generated with Claude Code

Summary by CodeRabbit

Release Notes

  • New Features

    • Added GitHub Copilot as a supported AI assistant
    • Introduced script nodes for deterministic TypeScript/Python execution in workflows
    • Approval gates now capture optional rejection reasons passed to rejection prompts
    • Workflow tags enable organization and categorization
    • Web UI node deletion via context menu and header button
  • Bug Fixes

    • Improved worktree registration error messages for stale symlinks
    • Discord adapter failures no longer block server startup
  • Documentation

    • Added Copilot provider setup guide and script nodes documentation
    • Expanded workflow authoring best practices and troubleshooting guides

Wirasm and others added 2 commits April 12, 2026 12:19
New community provider wired through @github/copilot-sdk. Registered as
builtIn: false alongside Pi. Covers session resume, effort/thinking
controls, envInjection, and a config-aware binary resolver (env >
config > vendor > PATH, with vendor winning over PATH in binary mode).

Advanced workflow features (MCP, skills, tool restrictions, structured
output, fallback model, sandbox) are intentionally flagged false —
they are not wired to Archon's workflow surface yet.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@coderabbitai

coderabbitai Bot commented Apr 21, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

This release introduces GitHub Copilot as a new community provider with full SDK integration, refactors the Pi provider for lazy loading, adds shared structured-output and skills utilities, extends DAG workflows with script/approval/cancel nodes, implements approval auto-resume on web, and includes extensive documentation updates and release infrastructure improvements. Version bumped to 0.3.9.

Changes

| Cohort / File(s) | Summary |
|---|---|
| **GitHub Copilot Community Provider**<br>packages/providers/src/community/copilot/*, packages/providers/package.json, packages/providers/src/index.ts | New CopilotProvider implementation with binary resolver, session management, streaming chunks, tool restrictions, MCP/skills translation, structured output, and capabilities matrix. Adds @github/copilot-sdk dependency and exports Copilot provider API surface. |
| **Copilot Provider Tests**<br>packages/providers/src/community/copilot/*.test.ts | Comprehensive test coverage for binary resolution, config parsing, provider streaming, MCP/skills/agents/tool-restriction translation, structured output, provider hardening, and session resumption scenarios. |
| **Pi Provider Refactoring**<br>packages/providers/src/community/pi/provider.ts, packages/providers/src/community/pi/options-translator.ts, packages/providers/src/community/pi/event-bridge.ts | Lazy-loads Pi SDK via dynamic imports at sendQuery() time, adds package dir shim for compiled deployments, delegates model lookup to dynamic ModelRegistry, extracted shared utilities, and improves auth handling with structured logging. |
| **Pi Provider Lazy-Load Verification**<br>packages/providers/src/community/pi/provider-lazy-load.test.ts | New regression test ensuring Pi SDK packages are not eagerly imported during module registration. |
| **Shared Structured-Output & Skills Utilities**<br>packages/providers/src/shared/structured-output.ts, packages/providers/src/shared/skills.ts | New shared modules for JSON schema prompt augmentation and structured-output parsing, plus provider-agnostic skill directory resolution by name, extracted for reuse across providers. |
| **Provider Registry & Tests**<br>packages/providers/src/registry.ts, packages/providers/src/registry.test.ts, packages/providers/src/claude/binary-resolver.*, packages/providers/src/codex/binary-resolver.* | Added Copilot provider registration, extended Claude/Codex binary resolvers to support home-directory autodetection, added comprehensive resolver tests for multiple fallback paths. |
| **Workflow Schema Extensions**<br>packages/workflows/src/schemas/workflow.ts, packages/workflows/src/loader.ts, packages/workflows/src/executor-shared.ts, packages/workflows/src/dag-executor.ts | Added script node support, approval node support, cancel node support, workflow-level tags, completion signal detection for multiple formats, MCP failure filtering, streaming policy for paused status, and updated loop handling. |
| **Workflow Tests**<br>packages/workflows/src/loader.test.ts, packages/workflows/src/dag-executor.test.ts, packages/workflows/src/executor-shared.test.ts, packages/workflows/src/defaults/bundled-defaults.test.ts | Extended test coverage for script/approval/cancel node parsing, tag normalization, MCP filtering, completion signals, loop/DAG finalization, and bundled workflow validation. |
| **Orchestrator & Auto-Resume**<br>packages/core/src/orchestrator/orchestrator*.ts, packages/server/src/routes/api.ts, packages/server/src/routes/api.workflow-runs.test.ts | Added parentConversationId to workflow execution for web approval auto-resume, implemented tryAutoResumeAfterGate() helper for orchestrator dispatch on approval/rejection, extended API approval/reject handlers to support optional rejection reasons with on-reject prompt wiring. |
| **Web UI Approval & Node Management**<br>packages/web/src/components/chat/WorkflowProgressCard.tsx, packages/web/src/components/dashboard/*, packages/web/src/hooks/useBuilderKeyboard.* | Updated rejection flows to capture optional reason text, added ConfirmRunActionDialog for rejection reason input, implemented node context-menu deletion in WorkflowCanvas, added keyboard shortcut support (Delete/Backspace) with input-target awareness, extracted testable keyboard handler functions. |
| **Workflow Metadata & Tags**<br>packages/web/src/lib/workflow-metadata.*, packages/web/src/components/workflows/WorkflowCard.tsx, packages/web/src/lib/api.generated.d.ts | Added explicit tags field to workflow definitions, updated getWorkflowTags to honor explicit tags with inference fallback, updated generated OpenAPI types for workflow tags and health-response platforms list. |
| **Release & CI Improvements**<br>scripts/build-binaries.sh, .github/workflows/release.yml, CHANGELOG.md, homebrew/archon.rb | Removed bytecode generation flag from binary builds, updated smoke tests to use --no-worktree flag, added release notes for 0.3.9/0.3.8/0.3.7, updated Homebrew formula checksums. |
| **Release Skill & Workflow Documentation**<br>.claude/skills/release/SKILL.md, .claude/skills/test-release/SKILL.md | Enhanced release skill with compiled-binary smoke test (Step 1.5), improved CI failure detection and rerun logic, added recovery procedures; updated test-release skill with archon-stable symlink preservation for brew installations. |
| **Maintainer Workflows & Commands**<br>.archon/workflows/maintainer/*, .archon/commands/maintainer-*, .archon/scripts/maintainer-standup-*, .archon/maintainer-standup/* | New maintainer-review-pr workflow with gating, aspect-specific reviews, synthesis, and auto-commenting; new maintainer-standup workflow with git/GitHub data gathering, triage, and state persistence; supporting scripts and documentation. |
| **Workflow-Builder & Node Documentation**<br>packages/docs-web/src/content/docs/guides/script-nodes.md, packages/docs-web/src/content/docs/book/dag-workflows.md, .claude/skills/archon/*, .archon/workflows/defaults/archon-*.yaml | Comprehensive guides for script nodes (bun/uv execution), approval gates, cancellation, tags, good practices, parameter matrix, troubleshooting, and repo init; updated default workflow templates with model references and artifact management. |
| **E2E Test Workflows**<br>.archon/workflows/test-workflows/e2e-copilot-*.yaml, .archon/workflows/test-workflows/e2e-minimax-smoke.yaml | New Copilot smoke/feature test workflows, updated Pi provider E2E tests for model validation and session verification. |
| **Configuration & Miscellaneous**<br>package.json, packages/*/package.json, eslint.config.mjs, .gitignore, CLAUDE.md, CONTRIBUTING.md, packages/server/src/index.ts, packages/web/src/routes/DashboardPage.tsx | Version bumps to 0.3.9 across all packages, axios override, ESLint ignores, maintainer-standup artifacts in .gitignore, PR template guidance, Discord adapter fault tolerance with continued-operation mode, rejection handler refactoring. |

Sequence Diagram(s)

sequenceDiagram
    participant Web as Web UI
    participant API as API Server
    participant Orch as Orchestrator
    participant Exec as Workflow<br/>Executor

    Web->>API: PUT /approve (with reason?)
    activate API
    API->>API: Record approval event
    API->>API: Update workflow state
    
    alt Has parent_conversation_id
        API->>API: Fetch parent conversation
        alt Parent is web platform
            API->>Orch: Dispatch resume message<br/>(/workflow run ...)
            Note over API,Orch: tryAutoResumeAfterGate()
            Orch->>Exec: Execute workflow<br/>(with parent context)
            Exec-->>Orch: Results
            Orch-->>API: Dispatch complete
            API-->>Web: "Resuming workflow."
        else Non-web parent
            API-->>Web: "Send a message to continue."
        end
    else No parent context
        API-->>Web: "Send a message to continue."
    end
    deactivate API

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes

This PR introduces substantial new functionality (GitHub Copilot provider with 600+ lines of implementation and tests), refactors existing critical provider code (Pi lazy-loading), extends workflow schema with new node types and capabilities, modifies approval/orchestration flow for auto-resume, updates Web UI components for node management and rejection workflow, and includes extensive documentation and infrastructure changes. The changes are heterogeneous, affecting multiple subsystems with varying complexity and density (provider internals, workflow engine, server API, Web UI, documentation). The variety of file types and reasoning patterns (provider SDK integration, workflow DSL schema, server orchestration, React components, Bash scripts, documentation, YAML workflows) compounds review effort significantly.

Poem

🐰 A copilot now soars through workflows so grand,
Scripts run without AI, by developer's hand.
Approvals auto-resume with reason and care,
Tags float on workflows—oh what a fair affair!
From Pi's lazy loading to Opus's new call,
This release hops forward, delighting us all! 🌟


devin-ai-integration[bot]

This comment was marked as resolved.

Translates nodeConfig.allowed_tools/denied_tools to Copilot SessionConfig
availableTools/excludedTools. SDK enforces availableTools precedence when
both are set. Also refactors sendQuery to route all SessionConfig
construction through buildSessionConfig() with a ProviderWarning collector,
so subsequent parity phases (MCP, skills, structured output, agents,
fallbackModel) can plug in as single applyX() calls.

Flips COPILOT_CAPABILITIES.toolRestrictions = true.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
coderabbitai[bot]

This comment was marked as resolved.

popemkt and others added 3 commits April 21, 2026 22:23
Reuses Claude's loadMcpConfig() to parse the MCP JSON file referenced by
nodeConfig.mcp, expand $ENV_VAR references, and assign the result to
Copilot's SessionConfig.mcpServers. Missing env vars surface as a
system warning chunk; IO/JSON errors propagate.

Flips COPILOT_CAPABILITIES.mcp = true.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Extracts Pi's resolvePiSkills into a provider-agnostic shared utility at
packages/providers/src/shared/skills.ts as resolveSkillDirectories. Pi
re-exports it under the historical name for back-compat.

Copilot's applySkills() maps nodeConfig.skills (names) → absolute skill
directory paths → SessionConfig.skillDirectories. Missing names surface
as a single system warning chunk.

Flips COPILOT_CAPABILITIES.skills = true.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
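The name → directory resolution this commit describes can be sketched as follows (roots and helper name are illustrative; the real resolveSkillDirectories may differ):

```typescript
import { existsSync } from 'node:fs';
import { join } from 'node:path';

// Probe candidate roots for <root>/<name>/SKILL.md; collect resolved
// directories and report unresolved names so the caller can emit a single
// consolidated warning chunk.
function resolveSkillDirectoriesSketch(
  names: string[],
  roots: string[],
): { resolved: string[]; missing: string[] } {
  const resolved: string[] = [];
  const missing: string[] = [];
  for (const name of names) {
    const hit = roots
      .map((root) => join(root, name))
      .find((dir) => existsSync(join(dir, 'SKILL.md')));
    if (hit) resolved.push(hit);
    else missing.push(name);
  }
  return { resolved, missing };
}
```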
Copilot has no native JSON-mode equivalent to Claude's outputFormat or
Codex's outputSchema, so this mirrors Pi's best-effort approach: augment
the user prompt with a "respond with JSON matching this schema"
instruction, accumulate the assistant transcript, and parse it on the
terminal result chunk. Parse failure leaves structuredOutput unset — the
dag-executor's existing dag.structured_output_missing warning handles
downstream degradation.

Also extracts augmentPromptForJsonSchema and tryParseStructuredOutput
into packages/providers/src/shared/structured-output.ts. Pi re-exports
them under the original paths for back-compat.

Flips COPILOT_CAPABILITIES.structuredOutput = true.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
devin-ai-integration[bot]

This comment was marked as resolved.

popemkt and others added 2 commits April 21, 2026 22:46
- Abort-guard: check abortSignal.aborted BEFORE awaiting sendAndWait,
  since addEventListener('abort', ...) is a no-op on already-aborted
  signals and would otherwise enter the 24h timeout path after cancel.
- Cleanup: wrap session.disconnect() and client.stop() in independent
  try/catch-log blocks in the finally so a cleanup throw can't replace
  a successful result or the friendly auth/model error.
- Fallback-content flag: flip sawAssistantContent = true when emitting
  the final-message fallback, so a session.error received on the same
  turn doesn't double-emit as a spurious system warning (Devin finding).
- Model trim: strip leading/trailing whitespace on requestOptions.model
  and copilotConfig.model before assigning to SessionConfig.model.
- Binary resolver: replace existsSync() with isExecutableFile() (isFile
  + exec bit on posix, isFile on win32) so env/config/vendor overrides
  fail early instead of passing a directory or non-executable to the
  SDK.
- Binary resolver test: per-test tmpdir + chmod 0o755, no more shared
  /tmp/.archon state.
- Docs: trust-boundary note that enableConfigDiscovery = true bypasses
  Archon's workflow validation surface for MCP/skills.
- Registry test: drop stale "Pi is currently the only community
  provider" comment now that Copilot is bundled too.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Maps nodeConfig.agents (Record<name, AgentDef>) to Copilot
SessionConfig.customAgents. Direct pass-through for the fields Copilot's
CustomAgentConfig supports: name (from the map key), description, prompt,
and tools (allowlist — Copilot has no per-agent denylist).

Archon agent fields Copilot cannot represent (model, disallowedTools,
skills, maxTurns) surface as one consolidated system-warning chunk per
agent. We deliberately do NOT set SessionConfig.agent: Archon's
workflow model invokes sub-agents via the Task tool, not by switching
the active agent at session start.

Flips COPILOT_CAPABILITIES.agents = true.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
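A rough sketch of the mapping this commit describes (the types are invented stand-ins for the Archon and Copilot SDK shapes, trimmed to two unrepresentable fields for brevity):

```typescript
interface AgentDefSketch {
  description?: string; prompt?: string; tools?: string[];
  model?: string; maxTurns?: number; // fields Copilot cannot represent
}
interface CustomAgentSketch {
  name: string; description?: string; prompt?: string; tools?: string[];
}

// Pass through what CustomAgentConfig supports; warn once per agent for
// fields that would otherwise be silently dropped. SessionConfig.agent is
// deliberately never set.
function applyAgentsSketch(
  agents: Record<string, AgentDefSketch>,
): { customAgents: CustomAgentSketch[]; warnings: string[] } {
  const customAgents: CustomAgentSketch[] = [];
  const warnings: string[] = [];
  for (const [name, def] of Object.entries(agents)) {
    const ignored = (['model', 'maxTurns'] as const).filter((k) => def[k] !== undefined);
    if (ignored.length > 0) {
      warnings.push(`agent "${name}": ignored fields ${ignored.join(', ')}`);
    }
    customAgents.push({ name, description: def.description, prompt: def.prompt, tools: def.tools });
  }
  return { customAgents, warnings };
}
```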
@popemkt popemkt changed the title feat(providers): add GitHub Copilot community provider feat(providers): add GitHub Copilot community provider (@github/copilot-sdk) Apr 22, 2026
Wirasm and others added 6 commits April 22, 2026 08:47
…eam00#1126) (coleam00#1184)

* fix: detect completion signal in any XML tag, not just <promise> (coleam00#1126)

Loop nodes with `until:` reported max_iterations_reached when the AI wrapped
the completion signal in XML tags other than `<promise>` (e.g.,
`<COMPLETE>ALL_CLEAN</COMPLETE>`). The three existing regex patterns all missed
this format, causing the loop to exhaust iterations and fail.

Changes:
- Add generic XML-wrapped signal pattern to `detectCompletionSignal`
- Extend `stripCompletionTags` to strip matched XML-wrapped signals from output
- Pass `loop.until` to `stripCompletionTags` call site in dag-executor
- Add unit tests for detection and stripping of XML-wrapped signals
- Add integration test for loop completing on final iteration with XML tags

Fixes coleam00#1126

* fix: address review findings for completion signal detection

- Update detectCompletionSignal JSDoc to document all three detection formats
- Update stripCompletionTags JSDoc to mention the `until` parameter
- Remove superfluous `m` flag from xmlWrappedPattern (no anchors, no effect)
- Document that XML tag names are matched independently (intentional permissiveness)
- Add test: detects signal in mismatched XML tags (permissive behavior)
- Add test: strips both <promise> and XML-tagged signal in same chunk
- Add assertion in DAG integration test that raw XML tags don't appear in sent messages

* simplify: reduce complexity in changed files

* fix: require matching XML tag names in completion-signal detection

Follow-up to the initial broadening in this PR. The first version of the
regex accepted mismatched open/close tags (e.g. `<COMPLETE>X</done>`)
which was a small false-positive surface when the AI interleaves tags
in prose. Tightens both detectCompletionSignal and stripCompletionTags
to capture the tag name and enforce it on the close via \1
backreference. Case-insensitivity on the tag name is preserved.

Test updates:
- Flip the "permissive mismatch" case to assert strict rejection with a
  comment explaining the guard.
- Add a case-insensitive matching case to lock that behavior in.

No behavior change for workflows that use matching tags (the
overwhelmingly common case) or for <promise>...</promise>. Behavior change
is limited to the narrow "open tag and close tag disagree" case, which
only happens when the AI is confused — in which case we'd rather report
max_iterations_reached and let the author inspect than silently call
the loop complete.
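The tightened matching described above can be sketched as follows; the pattern and helper name are hypothetical (the real logic lives in `detectCompletionSignal`/`stripCompletionTags`), but the `\1` backreference and case-insensitive tag match are the mechanism the commit describes:

```typescript
// Capture the opening tag name and enforce it on the close via \1.
// With the /i flag, the backreference also matches case-insensitively,
// so <Complete>X</COMPLETE> matches but <COMPLETE>X</done> does not.
const xmlWrappedPattern = /<([A-Za-z][\w-]*)>\s*([^<]+?)\s*<\/\1>/i;

function detectXmlWrappedSignal(output: string, signal: string): boolean {
  const m = output.match(xmlWrappedPattern);
  return m !== null && m[2].trim() === signal;
}
```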
…oleam00#1113)

* fix(web): allow deleting nodes from Workflow Builder (coleam00#971)

Three independent gaps prevented users from deleting nodes added to the
Workflow Builder canvas: dropped nodes were never auto-selected so
keyboard shortcuts silently no-oped, no right-click context menu
existed, and the Delete Node button was buried in the Advanced tab
(hidden below the viewport for Prompt/Command, completely absent for
Bash since bash nodes have no Advanced tab).

Fixes coleam00#971.

* fix(web): push undo snapshot before adding nodes on canvas

Call onPushSnapshot() before setNodes() in both onDrop and quick-add
handlers so that node additions are captured by undo/redo history.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(web): address PR coleam00#1113 review feedback

- Hold nodes/edges in refs so handleNodeDeleteById and onPushSnapshot
  can't capture stale pre-drop state (fixes undo-stack correctness).
- Clamp context-menu x/y to viewport so right-click near edges stays
  fully on-screen.
- Drop non-conformant role=menu/menuitem from the single-item context
  menu; rely on the native button for accessibility.
- Extend isInputTarget() to cover ARIA combobox/textbox/searchbox so
  Backspace in Radix/shadcn widgets never nukes a node.
- Extract handleBuilderKeydown as a pure function and add tests
  covering the Delete/Backspace + isInputTarget invariant.
- Remove issue-number references from code comments per CLAUDE.md.
- Document the new delete affordances in the Workflow Builder docs.
- Inline context-menu dismissal, rename pointer handler, drop unused
  deps in keyboardActions useMemo.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
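The extracted pure keydown handler described above can be sketched like this, using simplified target shapes (the real `handleBuilderKeydown`/`isInputTarget` operate on DOM events; the field names here are assumptions):

```typescript
// Minimal stand-in for the parts of an EventTarget the guard inspects.
interface TargetLike {
  tagName: string;
  isContentEditable: boolean;
  role: string | null;
}

// Never treat Delete/Backspace as "delete node" while focus is in a text
// entry widget — including ARIA combobox/textbox/searchbox (Radix/shadcn).
function isInputTarget(t: TargetLike): boolean {
  if (t.isContentEditable) return true;
  if (['INPUT', 'TEXTAREA', 'SELECT'].includes(t.tagName)) return true;
  return ['combobox', 'textbox', 'searchbox'].includes(t.role ?? '');
}

function handleBuilderKeydown(
  key: string,
  target: TargetLike,
  deleteSelected: () => void
): boolean {
  if ((key === 'Delete' || key === 'Backspace') && !isInputTarget(target)) {
    deleteSelected();
    return true; // handled
  }
  return false;
}
```

Keeping the handler pure (no React state, no DOM lookups) is what makes the Delete/Backspace + `isInputTarget` invariant unit-testable.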
coleam00#1155)

* fix(workflows): make adversarial init sed portable on macOS

* chore: regenerate bundled-defaults after adversarial-dev sed fix

Sync generated bundle with the new temp-file sed pattern in
archon-adversarial-dev.yaml so check:bundled passes and binary
distributions ship the macOS-safe version.

---------

Co-authored-by: laplace young <yangqk12@whu.edu.cn>
Co-authored-by: Rasmus Widing <rasmus.widing@gmail.com>
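The portability issue behind that fix: GNU sed accepts `sed -i expr file`, while BSD/macOS sed requires a (possibly empty) suffix argument, `sed -i '' expr file`, so a bare `-i` breaks on one platform or the other. Writing to a temp file and moving it back works on both. An illustrative sketch of the pattern, not the actual workflow YAML:

```shell
# Portable in-place edit via temp file + rename — avoids sed -i entirely.
f=$(mktemp)
printf 'hello world\n' > "$f"
sed 's/world/archon/' "$f" > "$f.tmp" && mv "$f.tmp" "$f"
cat "$f"   # prints: hello archon
```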
…coleam00#1327)

* fix(workflows): filter user-plugin MCP noise out of workflow warnings

Before this change, the dag-executor surfaced every entry in the Claude
SDK's "MCP server connection failed: …" system message to the user. That
message includes user-level plugin MCPs inherited from ~/.claude/ (e.g.
`telegram`) that fail to connect in the headless workflow subprocess —
they're non-actionable noise for the workflow author.

Fix:
- Pre-compute the set of workflow-configured MCP server names per node
  by parsing the `mcp:` config file once at the start of
  executeNodeInternal. No caller-facing API change; no duplication of
  the provider's env-var expansion logic (we only need the keys).
- Split the system-message handler: the `MCP server connection failed:`
  path now surfaces only the subset of failing names that match the
  node's configured set; user-plugin failures are debug-logged as
  `dag.mcp_plugin_connection_suppressed`. The `⚠️` branch is unchanged.

Supersedes coleam00#1134 (closed as stale — the Windows HOME fix in that PR was
already shipped via coleam00#1302, and the claude.ts enabledPlugins change
targeted a file that has since moved into @archon/providers).

Credits @MrFadiAi for identifying and reporting the underlying issue.

Co-authored-by: Fadi Ai <MrFadiAi@users.noreply.github.com>

* fix(workflows): preserve MCP failure status in filtered message + observability

Address review feedback on PR coleam00#1327:

- parseMcpFailureServerNames now returns {name, segment} entries so the
  forwarded "MCP server connection failed: ..." message preserves the
  per-server status detail (e.g. "(timeout)", "(disconnected)") that the
  bare-name reconstruction was dropping.
- loadConfiguredMcpServerNames now debug-logs read failures as
  dag.mcp_filter_config_read_failed instead of swallowing them silently.
  A transient EMFILE/EBUSY at filter time would otherwise silently
  reclassify a real workflow-MCP failure as plugin noise.
- Add 4 integration tests through executeDagWorkflow covering the mixed
  workflow/plugin split, all-plugin suppression, no-mcp:-config nodes,
  and the unchanged ⚠️ warning path.
- Drop a WHAT comment above configuredMcpNames and a temporal phrase
  ("before this filter landed") that would rot on merge.
- Document the filtering boundary in guides/mcp-servers.md and add a
  troubleshooting row for users debugging silently-suppressed plugin MCPs.

---------

Co-authored-by: Fadi Ai <MrFadiAi@users.noreply.github.com>
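The split described above can be sketched as follows; the helper names come from the commit text, but the comma-separated failure-message format and parsing details are assumptions:

```typescript
interface McpFailure {
  name: string;
  segment: string; // e.g. 'telegram (timeout)' — preserves per-server status
}

// Pull per-server segments out of the SDK's aggregate failure message.
function parseMcpFailureServerNames(message: string): McpFailure[] {
  const prefix = 'MCP server connection failed: ';
  if (!message.startsWith(prefix)) return [];
  return message
    .slice(prefix.length)
    .split(',')
    .map((s) => s.trim())
    .filter((s) => s.length > 0)
    .map((segment) => ({ name: segment.split(' ')[0], segment }));
}

// Forward only failures for servers the workflow's `mcp:` config declares;
// everything else (user-plugin MCPs from ~/.claude/) is suppressed.
function filterMcpFailures(
  message: string,
  configuredNames: Set<string>
): { forwarded?: string; suppressed: string[] } {
  const failures = parseMcpFailureServerNames(message);
  const kept = failures.filter((f) => configuredNames.has(f.name));
  const suppressed = failures.filter((f) => !configuredNames.has(f.name)).map((f) => f.name);
  return {
    forwarded: kept.length > 0
      ? `MCP server connection failed: ${kept.map((f) => f.segment).join(', ')}`
      : undefined,
    suppressed,
  };
}
```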
Replace the real-filesystem vendor binary with spyOn(isExecutableFile) +
path-fragment assertion, matching the sibling Codex resolver test. The
previous test hardcoded the posix binary name 'copilot' while the resolver
looks for 'copilot.exe' on win32, so the vendor branch never matched on
Windows and the test then leaked to the system-installed Copilot CLI via
PATH.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…oleam00#1330)

axios <1.15.0 can be coerced to bypass NO_PROXY rules via hostname
normalization, enabling SSRF in the right network shape. Archon pulls
axios transitively through @slack/bolt (^1.12.0) and @slack/web-api
(^1.13.5); before this change bun.lock resolved axios@1.13.6 — within
the vulnerable range.

Adding "axios": "^1.15.0" to the root package.json overrides bumps the
transitive resolution to axios@1.15.1 (latest compatible 1.x). Both
Slack range specs accept it without API surface changes — no downstream
code touches axios directly.

Supersedes coleam00#1153. Credits @stefans71 for identifying and reporting the
vulnerability; their PR was stale on the lockfile (0.3.5 → 0.3.6 drift
on dev), so this is a fresh one-line re-do on current dev.

Closes coleam00#1053.

Co-authored-by: Stefans71 <stefans71@users.noreply.github.com>
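A minimal sketch of what the one-line override looks like in the root package.json (surrounding fields elided; only the `overrides` entry is from the commit):

```json
{
  "overrides": {
    "axios": "^1.15.0"
  }
}
```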

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (5)
packages/providers/package.json (1)

22-22: Test script is getting unwieldy — consider consolidating.

The test script now chains ~25 individual bun test <file> invocations with &&, which fails fast on the first failing suite (so later suites won't run or give feedback) and keeps growing with each new test file. Bun supports passing multiple file arguments or a glob in a single invocation.

♻️ Suggested simplification
-    "test": "bun test src/claude/provider.test.ts && bun test src/codex/provider.test.ts && bun test src/registry.test.ts && bun test src/codex/binary-guard.test.ts && bun test src/codex/binary-resolver.test.ts && bun test src/codex/binary-resolver-dev.test.ts && bun test src/claude/binary-resolver.test.ts && bun test src/claude/binary-resolver-dev.test.ts && bun test src/community/pi/model-ref.test.ts && bun test src/community/pi/config.test.ts && bun test src/community/pi/event-bridge.test.ts && bun test src/community/pi/options-translator.test.ts && bun test src/community/pi/session-resolver.test.ts && bun test src/community/pi/provider.test.ts && bun test src/community/copilot/config.test.ts && bun test src/community/copilot/binary-resolver.test.ts && bun test src/community/copilot/provider.test.ts && bun test src/community/copilot/tool-restrictions.test.ts && bun test src/community/copilot/mcp-translation.test.ts && bun test src/community/copilot/skills-translation.test.ts && bun test src/community/copilot/structured-output.test.ts && bun test src/community/copilot/provider-hardening.test.ts && bun test src/community/copilot/agents-translation.test.ts",
+    "test": "bun test src/**/*.test.ts",

If the original sequencing matters (e.g., registry tests relying on other files' mocks having run first), the per-file flow can be preserved while still running the whole set via a single bun process.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/providers/package.json` at line 22, The test script in package.json
currently chains many individual "bun test <file>" commands with &&; replace
this with a single bun invocation that accepts multiple files or a glob to run
all tests in one process (e.g., use bun test "src/**/*.test.ts" or pass multiple
file args to bun test) to simplify maintenance and avoid the long && chain;
update the "test" npm script entry (the "test" property) to use that single bun
test command while preserving any required test ordering if necessary.
packages/providers/src/community/pi/event-bridge.ts (1)

154-158: Hoist the import to the top of the file.

Same note as in packages/providers/src/community/pi/provider.ts: the mid-file import { tryParseStructuredOutput } from '../../shared/structured-output'; is purely stylistic — ES imports are hoisted — but it tends to trip lint rules and makes the module's dependency graph harder to scan. Prefer a top-of-file import plus either a separate export { tryParseStructuredOutput } there or a single export { tryParseStructuredOutput } from '../../shared/structured-output'; re-export to preserve the back-compat surface.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/providers/src/community/pi/event-bridge.ts` around lines 154 - 158,
Move the inline import of tryParseStructuredOutput to the top of the module and
replace the mid-file import+export with a top-level re-export; specifically,
remove the mid-file "import { tryParseStructuredOutput } ..." and instead add
either a top-of-file "import { tryParseStructuredOutput }" followed by an
"export { tryParseStructuredOutput }" or a single top-level "export {
tryParseStructuredOutput } from '...';" so the symbol tryParseStructuredOutput
is declared/re-exported at the module root for linters and clearer dependency
scanning.
packages/providers/src/community/pi/provider.ts (1)

68-72: Move the import to the top of the file.

Placing import { augmentPromptForJsonSchema } from '../../shared/structured-output'; in the middle of the module is non-idiomatic — ES module imports are hoisted anyway, so there's no semantic benefit from the placement, and most lint configs (import/first, @typescript-eslint conventions) will flag it. Move the import up with the other imports and keep the export { augmentPromptForJsonSchema }; re-export alongside the comment at the current location if you want the "shared-but-re-exported" signal, or convert to a single top-level export { augmentPromptForJsonSchema } from '../../shared/structured-output';.

♻️ Proposed refactor

At the top of the file, alongside the existing imports:

 import { createArchonUIBridge, createArchonUIContext } from './ui-context-stub';
+import { augmentPromptForJsonSchema } from '../../shared/structured-output';

And replace lines 68-72 with a single re-export:

-// Structured-output prompt augmentation is shared across providers. Import
-// once for local use and re-export so existing callers and tests keep their
-// import path stable; new providers should import from `../../shared/structured-output`.
-import { augmentPromptForJsonSchema } from '../../shared/structured-output';
-export { augmentPromptForJsonSchema };
+// Structured-output prompt augmentation is shared across providers. Re-exported
+// here so existing Pi callers/tests keep their import path stable; new providers
+// should import from `../../shared/structured-output`.
+export { augmentPromptForJsonSchema } from '../../shared/structured-output';
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/providers/src/community/pi/provider.ts` around lines 68 - 72, Move
the import of augmentPromptForJsonSchema to the top with the other imports and
replace the mid-file import with a re-export; specifically, remove the inline
"import { augmentPromptForJsonSchema } from '../../shared/structured-output';"
from the middle of the module, add the import or single-line re-export at the
top alongside other imports, and keep or add "export {
augmentPromptForJsonSchema }" (or use "export { augmentPromptForJsonSchema }
from '../../shared/structured-output';") to preserve the public API; update
references to augmentPromptForJsonSchema accordingly.
packages/providers/src/shared/structured-output.ts (1)

51-54: Fence stripping accepts only json or unlabeled fences.

The regex `^```(?:json)?\s*\n?` strips bare ``` and ```json opening fences but leaves other labels (e.g. ```javascript, ```ts) in place, which will cause `JSON.parse` to fail and return `undefined`. That's a safe degradation, but if instruction-following drift produces labels like `javascript` or `typescript` it will silently miss the structured output. Consider widening the match to any identifier, since the fence context already implies the content is supposed to be the JSON:

♻️ Suggested tweak
-    .replace(/^```(?:json)?\s*\n?/i, '')
+    .replace(/^```[\w-]*\s*\n?/i, '')

This accepts any single-word language tag (including empty) without compromising the boundary anchoring.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/providers/src/shared/structured-output.ts` around lines 51 - 54, The
opening-fence strip in the cleaned variable only matches unlabeled or "json"
fences; update the first .replace used to build cleaned (the one stripping the
opening ``` fence) so it accepts any single-word language tag
(letters/digits/underscore/hyphen) or no tag instead of only "json", while
preserving the start-anchor and optional trailing newline/whitespace; keep the
existing closing-fence strip unchanged so the overall trimming still removes
fenced blocks before JSON.parse.
packages/providers/src/community/copilot/tool-restrictions.test.ts (1)

110-123: Add a test case for overlapping allow/deny lists to verify documented SDK precedence.

The current test uses disjoint lists and doesn't validate the documented behavior where availableTools takes precedence when a tool appears in both lists. Add a case with the same tool in both allowed_tools and denied_tools to ensure the provider correctly propagates this conflict to the SDK, which will handle it per its documented contract.

Suggested test case
   test('passes both through when both present (SDK enforces availableTools precedence)', async () => {
     await drain(
       new CopilotProvider().sendQuery('hi', '/repo', undefined, {
         model: 'gpt-5',
         nodeConfig: {
           allowed_tools: ['read_file'],
           denied_tools: ['shell'],
         },
       })
     );

     const cfg = capturedSessionConfigs[0] ?? {};
     expect(cfg.availableTools).toEqual(['read_file']);
     expect(cfg.excludedTools).toEqual(['shell']);
   });
+
+  test('defines behavior when the same tool is both allowed and denied', async () => {
+    await drain(
+      new CopilotProvider().sendQuery('hi', '/repo', undefined, {
+        model: 'gpt-5',
+        nodeConfig: {
+          allowed_tools: ['shell', 'read_file'],
+          denied_tools: ['shell'],
+        },
+      })
+    );
+
+    const cfg = capturedSessionConfigs[0] ?? {};
+    expect(cfg.availableTools).toEqual(['shell', 'read_file']);
+    expect(cfg.excludedTools).toEqual(['shell']);
+  });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/providers/src/community/copilot/tool-restrictions.test.ts` around
lines 110 - 123, Add a new test that sends a query via CopilotProvider.sendQuery
with the same tool in both nodeConfig.allowed_tools and nodeConfig.denied_tools
(e.g., 'read_file') and then assert that the capturedSessionConfigs entry for
that session still contains both availableTools and excludedTools entries
listing that tool (e.g., expect(cfg.availableTools).toEqual(['read_file']) and
expect(cfg.excludedTools).toEqual(['read_file'])); this verifies the provider
propagates overlapping allow/deny lists to the SDK which enforces the documented
precedence.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 147adf7f-4bce-4022-b6a6-1e57d6268099

📥 Commits

Reviewing files that changed from the base of the PR and between 9546ea7 and e0b57a8.

📒 Files selected for processing (18)
  • packages/docs-web/src/content/docs/getting-started/ai-assistants.md
  • packages/providers/package.json
  • packages/providers/src/community/copilot/agents-translation.test.ts
  • packages/providers/src/community/copilot/binary-resolver.test.ts
  • packages/providers/src/community/copilot/binary-resolver.ts
  • packages/providers/src/community/copilot/capabilities.ts
  • packages/providers/src/community/copilot/mcp-translation.test.ts
  • packages/providers/src/community/copilot/provider-hardening.test.ts
  • packages/providers/src/community/copilot/provider.ts
  • packages/providers/src/community/copilot/skills-translation.test.ts
  • packages/providers/src/community/copilot/structured-output.test.ts
  • packages/providers/src/community/copilot/tool-restrictions.test.ts
  • packages/providers/src/community/pi/event-bridge.ts
  • packages/providers/src/community/pi/options-translator.ts
  • packages/providers/src/community/pi/provider.ts
  • packages/providers/src/registry.test.ts
  • packages/providers/src/shared/skills.ts
  • packages/providers/src/shared/structured-output.ts
✅ Files skipped from review due to trivial changes (1)
  • packages/providers/src/community/copilot/capabilities.ts
🚧 Files skipped from review as they are similar to previous changes (4)
  • packages/docs-web/src/content/docs/getting-started/ai-assistants.md
  • packages/providers/src/community/copilot/binary-resolver.test.ts
  • packages/providers/src/community/copilot/binary-resolver.ts
  • packages/providers/src/community/copilot/provider.ts

Comment on lines +108 to +123
  test('rejects early when abortSignal is already aborted', async () => {
    const controller = new AbortController();
    controller.abort();

    const { error } = await collect(
      new CopilotProvider().sendQuery('hi', '/repo', undefined, {
        model: 'gpt-5',
        abortSignal: controller.signal,
      })
    );

    expect(error).toBeDefined();
    expect(error?.name).toBe('AbortError');
    // sendAndWait must NOT have been entered
    expect(mockSendAndWait).toHaveBeenCalledTimes(0);
  });

⚠️ Potential issue | 🟠 Major

Assert already-aborted requests do not create a Copilot session.

This currently only proves sendAndWait is skipped. For a true early-abort guard, the provider should avoid the external session setup path too.

Proposed test assertion
     expect(error).toBeDefined();
     expect(error?.name).toBe('AbortError');
+    expect(mockCreateSession).toHaveBeenCalledTimes(0);
     // sendAndWait must NOT have been entered
     expect(mockSendAndWait).toHaveBeenCalledTimes(0);
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
  test('rejects early when abortSignal is already aborted', async () => {
    const controller = new AbortController();
    controller.abort();
    const { error } = await collect(
      new CopilotProvider().sendQuery('hi', '/repo', undefined, {
        model: 'gpt-5',
        abortSignal: controller.signal,
      })
    );
    expect(error).toBeDefined();
    expect(error?.name).toBe('AbortError');
    // sendAndWait must NOT have been entered
    expect(mockSendAndWait).toHaveBeenCalledTimes(0);
  });

  test('rejects early when abortSignal is already aborted', async () => {
    const controller = new AbortController();
    controller.abort();
    const { error } = await collect(
      new CopilotProvider().sendQuery('hi', '/repo', undefined, {
        model: 'gpt-5',
        abortSignal: controller.signal,
      })
    );
    expect(error).toBeDefined();
    expect(error?.name).toBe('AbortError');
    expect(mockCreateSession).toHaveBeenCalledTimes(0);
    // sendAndWait must NOT have been entered
    expect(mockSendAndWait).toHaveBeenCalledTimes(0);
  });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/providers/src/community/copilot/provider-hardening.test.ts` around
lines 108 - 123, Test currently only ensures sendAndWait is not called but
doesn't ensure the provider avoids creating an external Copilot session; update
CopilotProvider.sendQuery to check abortSignal?.aborted at the very start and
throw an AbortError before any session setup or external calls, and add/adjust
the test to assert that any session-creation mock (e.g., mockCreateSession or
whatever function initializes a Copilot session) is not called in addition to
mockSendAndWait remaining at 0 calls.
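The early-abort guard this prompt asks for amounts to a pre-flight check before any session setup; a minimal sketch (helper name and error message assumed, not Archon's actual code):

```typescript
// Throw a DOM-style AbortError before any Copilot session is created,
// so an already-aborted request never reaches the SDK.
function assertNotAborted(signal?: AbortSignal): void {
  if (signal?.aborted) {
    const err = new Error('Request aborted before Copilot session creation');
    err.name = 'AbortError';
    throw err;
  }
}
```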

Comment on lines +123 to +127
const cfg = capturedSessionConfigs[0] ?? {};
const dirs = cfg.skillDirectories as string[] | undefined;
expect(dirs).toHaveLength(2);
expect(dirs?.[0]).toContain(join('.agents', 'skills', 'alpha'));
expect(dirs?.[1]).toContain(join('.claude', 'skills', 'beta'));

⚠️ Potential issue | 🟡 Minor

Assert exact absolute skill directories.

This test says it verifies absolute paths, but toContain(...) would also pass for relative suffixes. Pin the full expected paths so SDK-breaking relative-path regressions fail.

Proposed assertion tightening
     const cfg = capturedSessionConfigs[0] ?? {};
     const dirs = cfg.skillDirectories as string[] | undefined;
     expect(dirs).toHaveLength(2);
-    expect(dirs?.[0]).toContain(join('.agents', 'skills', 'alpha'));
-    expect(dirs?.[1]).toContain(join('.claude', 'skills', 'beta'));
+    expect(dirs).toEqual([
+      join(workDir, '.agents', 'skills', 'alpha'),
+      join(tmpRoot, 'home', '.claude', 'skills', 'beta'),
+    ]);
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
const cfg = capturedSessionConfigs[0] ?? {};
const dirs = cfg.skillDirectories as string[] | undefined;
expect(dirs).toHaveLength(2);
expect(dirs?.[0]).toContain(join('.agents', 'skills', 'alpha'));
expect(dirs?.[1]).toContain(join('.claude', 'skills', 'beta'));
const cfg = capturedSessionConfigs[0] ?? {};
const dirs = cfg.skillDirectories as string[] | undefined;
expect(dirs).toHaveLength(2);
expect(dirs).toEqual([
  join(workDir, '.agents', 'skills', 'alpha'),
  join(tmpRoot, 'home', '.claude', 'skills', 'beta'),
]);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/providers/src/community/copilot/skills-translation.test.ts` around
lines 123 - 127, The assertions currently use toContain which allows partial
matches; tighten them to assert exact absolute paths by replacing the two
toContain checks with exact equality checks against resolved absolute paths
(e.g., compute expectedAlpha = path.resolve(join('.agents','skills','alpha'))
and expectedBeta = path.resolve(join('.claude','skills','beta')) and then assert
dirs?.[0] === expectedAlpha and dirs?.[1] === expectedBeta using toBe or
toEqual). Update the test variables capturedSessionConfigs, cfg.skillDirectories
and the two expectations accordingly so the test fails on relative-path
regressions.

Comment on lines +62 to +74
  for (const rawName of skillNames) {
    if (typeof rawName !== 'string' || rawName.length === 0) continue;
    if (seen.has(rawName)) continue;
    seen.add(rawName);

    let found: string | undefined;
    for (const root of roots) {
      const candidate = join(root, rawName);
      if (existsSync(join(candidate, 'SKILL.md'))) {
        found = candidate;
        break;
      }
    }

⚠️ Potential issue | 🟡 Minor

Skill name is joined without path-traversal sanitization.

join(root, rawName) will happily resolve .. segments (e.g. rawName = "../../etc" → candidate outside any intended root). Because the existence check is on join(candidate, 'SKILL.md') the practical impact is bounded — an attacker would need a SKILL.md to already exist at the traversed path — but:

  1. It still lets a crafted skills: entry probe the filesystem outside the four documented roots, which leaks information via the warning string (missing list names).
  2. If a future change ever starts reading files from the resolved paths beyond SKILL.md (or passes them to a model/shell), this becomes a real traversal vector.

Skill names come from workflow YAML (usually trusted), so this is minor, but an early reject is cheap insurance:

🛡️ Suggested validation
   for (const rawName of skillNames) {
     if (typeof rawName !== 'string' || rawName.length === 0) continue;
+    // Reject path separators and traversal segments — skill names must be
+    // simple directory names within the configured roots.
+    if (rawName.includes('/') || rawName.includes('\\') || rawName === '..' || rawName === '.') {
+      missing.push(rawName);
+      continue;
+    }
     if (seen.has(rawName)) continue;
📝 Committable suggestion


Suggested change
  for (const rawName of skillNames) {
    if (typeof rawName !== 'string' || rawName.length === 0) continue;
+   // Reject path separators and traversal segments — skill names must be
+   // simple directory names within the configured roots.
+   if (rawName.includes('/') || rawName.includes('\\') || rawName === '..' || rawName === '.') {
+     missing.push(rawName);
+     continue;
+   }
    if (seen.has(rawName)) continue;
    seen.add(rawName);

    let found: string | undefined;
    for (const root of roots) {
      const candidate = join(root, rawName);
      if (existsSync(join(candidate, 'SKILL.md'))) {
        found = candidate;
        break;
      }
    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/providers/src/shared/skills.ts` around lines 62-74: the loop that
builds candidate = join(root, rawName) in the skill resolution (inside the
for...of skillNames loop, using seen, roots, candidate,
existsSync(join(candidate, 'SKILL.md'))) is vulnerable to path-traversal via
rawName; validate/sanitize rawName before joining: reject or skip names that are
absolute or contain ".." or path separators that would escape the root, or safer
— compute candidate = resolve(root, rawName) and then ensure path.relative(root,
candidate) does not start with ".." and candidate is not absolute; only then
call existsSync(join(candidate, 'SKILL.md')). Apply this check where candidate
is constructed so malicious rawName values cannot probe outside the intended
roots.
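
The containment check the prompt sketches can be written as a small standalone helper. This is a hypothetical sketch of the resolve-then-relative pattern, not the actual skills.ts implementation; the function name isInsideRoot is invented for illustration.

```typescript
import { resolve, relative, isAbsolute, sep } from 'node:path';

// Resolve the candidate against the root, then confirm the result stays
// inside that root: an empty relative path means the candidate IS the root,
// a leading '..' segment or an absolute relative path means it escaped
// (the absolute case covers drive changes on Windows).
function isInsideRoot(root: string, rawName: string): boolean {
  const base = resolve(root);
  const candidate = resolve(base, rawName);
  const rel = relative(base, candidate);
  return rel !== '' && rel !== '..' && !rel.startsWith(`..${sep}`) && !isAbsolute(rel);
}
```

With this shape, `../../etc` and absolute paths are rejected before any existsSync probe, closing the information-leak channel the review describes.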

Wirasm and others added 10 commits April 22, 2026 13:15
…"not a git repo" (coleam00#1332)

* fix(cli): surface stale-workspace registration error instead of fake "not a git repo"

When workflowRunCommand auto-registers an unregistered repo, a stale
~/.archon/workspaces/<owner>/<repo>/source symlink (pointing to an old
checkout) causes createProjectSourceSymlink() in @archon/paths to throw:

  Source symlink at <linkPath> already points to <existing>, expected <target>

The CLI caught that in a try/catch, logged it at warn level, continued
with `codebase = null`, and then the isolation / resume branches hit
their "codebase missing" fallback and threw the generic:

  Cannot create worktree: not in a git repository.

That message is false — the repo is valid; the Archon workspace entry
is stale. It sends users down the wrong diagnostic path (checking git
config, permissions, etc.) instead of pointing at the workspace dir.

Fix: preserve the registration error on a new `codebaseRegistrationError`
local, and at both fallback sites (resume + worktree-creation) check it
before the generic "not a git repo" branch. When set, throw a truthful:

  Cannot {create worktree,resume}: repository registration failed.
  Error: <original message>
  Hint: Remove the stale workspace entry at <dir> and retry, or
        use --no-worktree to skip isolation.

The hint's exact path comes from a small parser that extracts the
workspace directory from the known "Source symlink at …" format; when
the message shape doesn't match (future error text changes), the parser
returns null and we fall back to a generic "check registration under
<archon-home>/workspaces" hint — safe degradation.

Regression test in workflow.test.ts asserts the new error message and
negatively asserts the old "not in a git repository" string is gone.

Supersedes coleam00#1157 — that PR was draft + CONFLICTING against current dev,
and also mentioned Windows test-compat changes that weren't in the diff
(pruned scope). This is a fresh re-do focused strictly on coleam00#1146.

Closes coleam00#1146.

Co-authored-by: Bortlesboat <Bortlesboat@users.noreply.github.com>

* review: add resume-path test, null-fallback test, update troubleshooting docs

Addresses multi-agent review feedback on this PR:

- Add regression test for the --resume fallback site (the worktree-create
  site was already covered; the resume site had identical wiring but zero
  test coverage).
- Add test for the unrecognized-error-shape branch of
  buildRegistrationFailureError so the generic workspace hint is pinned
  (prevents accidental inversion of the stale-entry vs generic-hint
  ternary).
- Update the troubleshooting page to key on the new
  "Cannot create worktree: repository registration failed." message.
  Users hitting the new error won't find the page under the old heading,
  and the "In the future..." note is obsolete now that the error itself
  contains the cleanup path.
- Trim both new docblocks: keep the load-bearing cross-package error
  string contract in extractStaleWorkspaceEntry, drop narration of what
  the code already shows. Drop the "Before this helper existed..."
  paragraph from buildRegistrationFailureError — that's CHANGELOG
  material. Drop PR-reference suffix from the test section divider.

* review: guard getArchonHome in hint + export parser for direct tests

Two follow-up fixes to the multi-agent review commit (f32f002):

CodeRabbit finding — unguarded getArchonHome() in the fallback hint.
If getArchonHome() ever throws (misconfigured env vars, permission issues
on the resolution path), the registration-failure Error would never get
constructed: we'd throw a secondary home-resolution error that masks the
root cause. Wrap the fallback branch in try/catch — prefer losing the
exact path in the hint over replacing the actionable registration error.
A safe generic hint ("Check your Archon workspace registration and retry")
takes over when getArchonHome() throws. The original error.message is
always embedded verbatim in the re-thrown Error.

S2 — export extractStaleWorkspaceEntry for direct table tests. The parser
is where the cross-package string contract with @archon/paths actually
lives; direct tests against it are cheaper than end-to-end CLI tests and
pin the edge cases:

- POSIX path with forward slashes (typical unix user)
- Windows path with backslashes (verifies Math.max(lastIndexOf('/'), lastIndexOf('\\')))
- Unrelated error message (no prefix) → null
- Prefix matches but delimiter missing → null
- Source path without any separator → null (guards against returning
  empty string, which would produce a nonsense "Remove the stale
  workspace entry at " hint)
- Empty string → null

Six new cases in the test file. The claim of Windows support in the
PR description is now actually verified.

* fix(test): make generic-hint assertion path-separator agnostic

Windows test runner (CI) hit:
  Expected to contain: "Check your Archon workspace registration under /home/test/.archon/workspaces"
  Received: "... under \home\test\.archon\workspaces and retry, ..."

path.join normalizes to `\` on Windows and `/` on POSIX. The test hardcoded
forward slashes in the expected substring. Split into two separator-agnostic
asserts: the prefix up to "under", then `/workspaces\b/` regex for the final
path segment. Behavior doesn't change — the hint still gets the full
path.join'd workspaces dir on either platform.

---------

Co-authored-by: Bortlesboat <Bortlesboat@users.noreply.github.com>
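
The parser contract the commit describes (prefix match, delimiter match, parent-dir extraction, null on every unrecognized shape) can be reconstructed as a sketch. This is a hypothetical reconstruction; the real extractStaleWorkspaceEntry in the CLI may differ in detail.

```typescript
// Cross-package string contract with @archon/paths: the error message shape
// "Source symlink at <linkPath> already points to <existing>, expected <target>".
const PREFIX = 'Source symlink at ';
const DELIMITER = ' already points to ';

function extractStaleWorkspaceEntry(message: string): string | null {
  if (!message.startsWith(PREFIX)) return null; // unrelated error -> null
  const delim = message.indexOf(DELIMITER);
  if (delim === -1) return null; // prefix matches but delimiter missing -> null
  const linkPath = message.slice(PREFIX.length, delim);
  // The workspace entry is the symlink's parent dir; accept / and \ separators.
  const cut = Math.max(linkPath.lastIndexOf('/'), linkPath.lastIndexOf('\\'));
  if (cut <= 0) return null; // no separator -> null, never an empty string
  return linkPath.slice(0, cut);
}
```

Returning null on every unmatched shape is what lets the caller degrade to the generic "check registration under <archon-home>/workspaces" hint.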
…th-reason dialog (coleam00#1329)

* fix(server,web,workflows): web approval gates auto-resume + reject-with-reason dialog

Fixes three tightly-coupled bugs that made web approval gates unusable:

1. orchestrator-agent did not pass parentConversationId to executeWorkflow
   for any web-dispatched foreground / interactive / resumable run. Without
   that field, findResumableRunByParentConversation (the machinery the CLI
   relies on for resume) couldn't find the paused run from the same
   conversation on a follow-up message, and the approve/reject API handlers
   had no conversation to dispatch back to.

2. POST /api/workflows/runs/:runId/{approve,reject} recorded the decision
   and returned "Send a message to continue the workflow." — the workflow
   never actually resumed. Added tryAutoResumeAfterGate() that mirrors what
   workflowApproveCommand / workflowRejectCommand already do on the CLI:
   look up the parent conversation, dispatch `/workflow run <name>
   <userMessage>` back through dispatchToOrchestrator. Failures are
   non-fatal — the user can still send a manual message as a fallback.

3. The during-streaming cancel-check in dag-executor aborted any streaming
   node whenever the run status left 'running', including the legitimate
   transition to 'paused' that an approval node performs. A concurrent AI
   node in the same DAG layer now tolerates 'paused' and finishes its own
   stream; only truly terminal / unknown states (null, cancelled, failed,
   completed) abort the in-flight stream.

Web UI: ConfirmRunActionDialog gains an optional reasonInput prop (label +
placeholder) that renders a textarea and passes the trimmed value to
onConfirm. WorkflowRunCard (dashboard) and WorkflowProgressCard (chat)
both use it for Reject now — the chat card was still on window.confirm,
which was both inconsistent with the dashboard and couldn't collect a
reason. The trimmed reason threads through to $REJECTION_REASON in the
workflow's on_reject prompt.

Supersedes coleam00#1147. @jonasvanderhaegen surfaced the root cause and shape of
the fix; that PR was 87 commits stale and pre-dated the reject-UX upgrade
(coleam00#1261 area), so this is a fresh re-do on current dev.

Tests:
- packages/server/src/routes/api.workflow-runs.test.ts — 5 new cases:
  approve with parent dispatches; approve without parent returns "Send a
  message"; approve with deleted parent conversation skips safely; reject
  dispatches on-reject flows; reject that cancels (no on_reject) does NOT
  dispatch.
- packages/core/src/orchestrator/orchestrator.test.ts — updated the two
  synthesizedPrompt-dispatch tests for the new executeWorkflow arity.

Closes coleam00#1131.

Co-authored-by: Jonas Vanderhaegen <7755555+jonasvanderhaegen@users.noreply.github.com>

* fix: address multi-agent review findings for web approval auto-resume

C1 (critical) — cross-adapter misrouting guard
  tryAutoResumeAfterGate now checks parentConv.platform_type === 'web'
  before dispatching. Non-web parents (Slack/Telegram/GitHub/Discord)
  being approved from the dashboard skip auto-resume rather than
  dispatching a Slack thread_ts or Telegram chat_id through the web
  adapter's lock manager.

C2 (critical) — fire-and-forget dispatch replaced with await
  void dispatchToOrchestrator() meant the "Resuming workflow." response
  fired before async work completed, and the outer try/catch couldn't
  observe dispatch failures. Changed to await; response now accurately
  reflects dispatch outcome.

I1 — replaced logPrefix string-template (which produced 3-segment
  api.workflow_*.dispatched event names violating {domain}.{action}_{state})
  with literal event names per action, branched inside the helper.
  Accepts action: 'approve' | 'reject' instead.

I2 — corrected misleading "foreground/interactive" qualifier in the
  approve-endpoint comment; background web dispatches also set
  parent_conversation_id via the pre-created run, so they auto-resume too.

I3 — extracted shouldContinueStreamingForStatus() as a small exported
  policy and added 7 unit tests covering running/paused/null/cancelled/
  failed/completed/unknown. Full-integration coverage of the paused-
  tolerance invariant would require manipulating the 10s
  CANCEL_CHECK_INTERVAL_MS, which is flaky-prone; unit test of the
  policy function captures the same invariant deterministically.

I4 — updated approval-nodes.md and authoring-workflows.md to reflect
  that Web UI approve/reject now auto-resumes (no "send a follow-up
  message" copy), documented the reject-with-reason dialog and
  $REJECTION_REASON flow, and called out the cross-platform caveat.

S1 — rewrote streaming status check as positive shouldContinue safe-list
  via the extracted policy function, matching the inline comment.

S2 — inlined handleReject on the dashboard rather than squeezing
  rejectWorkflowRun through runAction with a closure; keeps runAction
  narrow for the single-arg lifecycle actions.

S5 — new regression test covering the non-web-parent skip path
  (slack-platform parent → dispatch skipped → response falls back to
  "Send a message to continue").

S6 — removed stale reference to runAction in ConfirmRunActionDialog's
  onConfirm JSDoc (no longer accurate now that WorkflowProgressCard
  calls the dialog without runAction).

S7 — fixed misleading "user can resume manually by sending any message"
  docstring (resume is triggered by re-running the workflow command,
  not by an arbitrary message).

Skipped as out-of-scope:
  S3 — cancelWorkflowRun rowCount check (pre-existing defect; separate PR)
  S4 — tightening expect.anything() to UUID regex (deferred)
  S8 — 12-positional-arg executeWorkflow → options-bag refactor
    (tracked follow-up)

bun run validate green locally; 68 tests in api.workflow-runs.test.ts
(up from 67), 173 in dag-executor.test.ts (up from 166).
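
The extracted policy from I3/S1 is small enough to sketch in full. Assuming the name from the commit (the real function lives in dag-executor and may differ in shape), the positive safe-list replaces the old `status !== 'running'` check:

```typescript
// A sibling approval node pausing the run must not abort a concurrently
// streaming AI node in the same DAG layer, so 'paused' is tolerated;
// null, cancelled, failed, completed, and unknown states all stop the stream.
function shouldContinueStreamingForStatus(status: string | null): boolean {
  return status === 'running' || status === 'paused';
}
```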

* review: close I1/I2/I3/I4/I6 — paused tolerance in loop + emitter, resume test, useId

I1 (loop inter-iteration check) — dag-executor.ts:1715
  Used `!== 'running'` in the loop node's between-iteration status check.
  A sibling approval node pausing the run in the same topological layer
  would abort the loop mid-iteration with "Loop node '<id>' stopped at
  iteration N (paused)". Switched to the shared shouldContinueStreamingForStatus
  helper so paused is tolerated — same semantics the streaming check got.
  Extended inline comment explains the sibling-layer concurrency reason.

I2 (skipIfStatusChanged emitter unregister) — dag-executor.ts:2886
  At DAG-finalization writes the helper correctly skipped writing on any
  non-running state (paused included — don't mark a paused run complete),
  but it *also* called getWorkflowEventEmitter().unregisterRun() which
  broke SSE observability for a run that's still live (waiting for user
  approval). Split the two responsibilities: skip the write for all
  non-running states, but only unregister the emitter for terminal states
  (cancelled / deleted / completed / failed). `paused` keeps the emitter
  registered so resume stays visible on the dashboard.

I3 (foreground_resume_detected branch untested) — orchestrator-agent.test.ts
  That branch was modified as part of the original fix (added
  parentConversationId as 11th positional arg) but no existing test
  configured mockFindResumableRunByParentConversation to return non-null.
  A positional mistake (e.g. accidentally swapping issueContext and
  parentConversationId) would silently break auto-resume with no failing
  test. New regression test configures the mock, asserts both the cwd
  comes from the resumable run's working_path AND parentConversationId
  is passed correctly at position 10.

I4 (null-parent log level) — api.ts tryAutoResumeAfterGate
  `getConversationById` returning null is a data-integrity signal (the
  parent conversation was deleted while the run was paused) — worth
  surfacing at info level so operators notice, not hiding at debug.
  Missing platform_conversation_id on an existing row would be an unusual
  DB state and stays at debug. Added `parentDeleted: boolean` to the log
  context so the two cases are distinguishable in observability.

I6 (hardcoded DOM id) — ConfirmRunActionDialog.tsx
  `id="confirm-run-action-reason"` collided when multiple dialog instances
  share the same page (Radix portals mitigate in practice but the code
  was fragile). Switched to React.useId() so each instance gets a unique
  id — htmlFor/id wiring preserved.

S11 (arity-only assertion) — orchestrator-agent.test.ts:1092 area
  The interactive-workflow-on-web test asserted mockExecuteWorkflow was
  called, but nothing about the args. Added a specific assertion that
  position 10 (parentConversationId) equals 'conv-1' (the caller
  conversation id) — pins the wiring that I1/I2 depend on being correct.

Deferred (from review S1-S10, I5, I7):
  - S1 (ExecuteWorkflowOptions bag) — tracked as standalone follow-up;
    12 positional args with 2 adjacent optionals is a real maintenance
    hazard but the refactor deserves its own PR.
  - S7 (WHY comment on non-web else branch) — review text says the branch
    "correctly omits" parentConversationId but the code passes it; the
    combination with the web-parent guard in tryAutoResumeAfterGate is
    intentional. Not adding a justify-what-we-don't-do comment.
  - S2/S3/S4/S5/S8/S9/S10 — pure polish (event-map ternary, platformConvId
    inlining, shared constant for REJECTION_REASON_INPUT, onChange arrow
    shorthand, discriminated union, docblock trim, suffix comment drop)
  - I5 (soften "Resuming workflow." to "— check the dashboard for progress")
    — users clicking from the dashboard are already on the dashboard; the
    current text is accurate (enqueue completed) and concise.
  - I7 (test dispatch-throws path) — covered implicitly by the try/catch
    branch of tryAutoResumeAfterGate returning false; a direct test would
    require mocking handleMessage to throw and would couple to
    dispatchToOrchestrator internals.

bun run validate green; 189 dag-executor tests, 98 orchestrator-agent
tests, 68 api.workflow-runs tests — all the new cases pass.

---------

Co-authored-by: Jonas Vanderhaegen <7755555+jonasvanderhaegen@users.noreply.github.com>
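
The I2 split of responsibilities at DAG finalization can be sketched as two separate predicates. Function names here are hypothetical, chosen to mirror the commit text:

```typescript
// Skip the completion write for every non-running state (never mark a
// paused run complete), but only tear down the SSE emitter for truly
// terminal states — 'paused' keeps the emitter registered so resume
// stays visible on the dashboard.
const TERMINAL_STATES = new Set(['cancelled', 'deleted', 'completed', 'failed']);

function shouldSkipFinalizationWrite(status: string): boolean {
  return status !== 'running';
}

function shouldUnregisterEmitter(status: string): boolean {
  return TERMINAL_STATES.has(status);
}
```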
Bun 1.3.11's bytecode generation produces broken output for our
module graph ("TypeError: Expected CommonJS module to have a function
wrapper" at runtime, reproduces natively on darwin-arm64 too). The
compile already fails with "Failed to generate bytecode for ./cli.js"
and proceeds without it, but the resulting binary still crashes at
module instantiation.

Only --minify remains. Binary size is unchanged in practice
(bytecode was being skipped after the error anyway), but now the
build succeeds cleanly and the binary runs.

Surfaced while building the v0.3.7 release.
…failure recovery

Two gaps caused the v0.3.7 release to ship an empty GitHub Release and break
install.sh for all users for ~30 min. Both are now handled in the skill:

1. Step 1.5 — mandatory pre-flight `bun build --compile` smoke test before
   any user-visible step (version bump, PR, tag). Compiles the native target
   only (~15-30s), runs `archon version`, greps the output for runtime-error
   markers, aborts if anything fails. This would have caught both v0.3.7
   bugs (the --bytecode flag producing broken bytecode, and the Pi SDK's
   readFileSync of a path only present in node_modules) before the tag
   was ever pushed. Escape hatch: fix the underlying bug on a feature
   branch and re-run /release — no workaround to "just skip".

2. Step 10 now detects deterministic CI failure (two reruns with same
   error) and jumps to a new "Recovery: deterministic release CI failure"
   section instead of looping until timeout. Recovery documents:
     - Delete the empty GitHub Release (keep the tag — immutability)
       so releases/latest falls back to the prior working version and
       install.sh stops 404-ing within seconds.
     - Diagnose via `gh run view --log-failed`, with a cheat-sheet of
       common failure modes (bundler bytecode, module-init crashes,
       Windows timeouts).
     - Cut the NEXT patch version with the fix — never reuse the broken
       tag, never force-push, never upload binaries built from a
       different SHA onto the original release.
     - CHANGELOG note template that references the broken prior version
       so the audit trail is clear for anyone inspecting tags later.

Also added two Important Rules:
  - Never skip Step 1.5. The ~30s cost keeps failure modes local.
  - If Step 1.5 fails, abort the release. No "skip and hope CI passes."

Skill grows from 389 → 548 lines. No changes to Steps 2–9, 11, 12.
…coleam00#1355)

* fix(providers/pi): lazy-load Pi SDK to unbreak compiled archon binary

Pi's @mariozechner/pi-coding-agent/dist/config.js runs
`readFileSync(getPackageJsonPath(), 'utf-8')` at module top level. Inside
a compiled archon binary `getPackageJsonPath()` resolves to
`dirname(process.execPath) + '/package.json'`, which doesn't exist next
to `/usr/local/bin/archon` — so archon crashes with ENOENT at startup
before any command runs. v0.3.7's release binary build appeared to
compile clean (CI fell over first on an unrelated --bytecode issue) but
even the fixed-bytecode binary fails the same way locally.

Convert all Pi SDK value imports and Pi-dependent helper imports to
dynamic imports inside `PiProvider.sendQuery()`. Type-only imports stay
static (erased by TS). Effect: registering the Pi provider and creating
an instance no longer loads Pi's SDK — load happens only when a Pi
workflow actually runs. Claude and Codex providers keep their static
import style (their SDKs have no module-init side effects that fail in
a binary); see the file header comment in provider.ts for why Pi is the
deliberate outlier.

Class constructors (AuthStorage, ModelRegistry, SettingsManager) are
accessed via `piCodingAgent.X` rather than destructured to keep
eslint's naming-convention rule happy without a disable.

Add a regression test (provider-lazy-load.test.ts) that mocks
@mariozechner/pi-coding-agent and @mariozechner/pi-ai, walks the same
registerCommunityProviders() → getAgentProvider('pi') path the CLI and
server take, and asserts neither SDK module was loaded. Runs in its own
`bun test` invocation (mock.module is process-wide).

Verified locally: `bun build --compile --minify --target=bun-darwin-arm64`
produces a binary whose `archon version` runs cleanly and reports
Build: binary, where previously every command crashed at boot.

* address review: changelog, docstring, type tighten, Theme type-only import

From the multi-agent review on coleam00#1355:

- Add CHANGELOG.md entries under [Unreleased] ### Fixed for this PR's Pi
  lazy-load fix and the --bytecode removal from coleam00#1354 — both user-visible
  fixes (compiled binary unusable without them).
- Rewrite the test-header docstring in provider-lazy-load.test.ts to
  describe counter-based detection instead of "mocks throw" (contradicted
  the actual code directly below it).
- Tighten `lookupPiModel`'s first parameter from `unknown` to a local
  `GetModelFn` alias, moving the runtime-string cast to the single call
  site with a pointer to the docblock.
- Update the class docblock on `PiProvider` — "v1 capabilities are all
  false" was stale; PI_CAPABILITIES has seven flags set to true.
- `ui-context-stub.ts` imports `Theme` as a non-type value even though
  every usage is in a type position. Fold it into the existing
  `import type {…}` block so a future runtime-class `Theme` in Pi can't
  reintroduce an eager module load via this sibling.

No behavior change. Type-check, lint, format, tests, and a local
darwin-arm64 compile + version smoke all clean.
…leam00#1357)

The two "Smoke-test Claude binary-path resolver" steps in
.github/workflows/release.yml run `archon workflow run archon-assist
"hello"` against a fresh `git init` temp repo with no origin. As of
coleam00#1310's worktree-policy changes, default isolation auto-syncs the
worktree with origin before creating it, which fails with
"neither origin/HEAD nor origin/main exist" — hit before Claude's
resolver is even reached, so the test assertions ("Claude Code not
found", "CLAUDE_BIN_PATH") never match and the linux-x64 build
aborts the whole release matrix.

The tests exercise the Claude resolver path, not worktree setup, so
--no-worktree is the correct fix: it short-circuits
validateAndResolveIsolation and skips the origin sync entirely.
Matches the documented usage in CLAUDE.md
(`archon workflow run quick-fix --no-worktree "Fix typo"`).

Surfaced while cutting v0.3.8 — the release CI failed deterministically
on both builds. Binaries themselves are fine (the v0.3.7 Pi-lazy-load
fix works; local pre-flight passed on --help). v0.3.8's GitHub Release
has been deleted so `releases/latest` falls back to v0.3.6; next
release will be v0.3.9 with this fix.
Wirasm and others added 30 commits April 28, 2026 11:25
…oleam00#1460)

Both SDKs were ~30 patch releases behind. Validation suite passes
(type-check, lint, format, tests across all 10 packages) without code
changes. The only sustained Claude SDK behavior change in the range —
v0.2.111's options.env overlay/replace flap, since reverted to overlay —
is a no-op for Archon, which already passes { ...process.env } as the
SDK env.
…oleam00#1461)

* fix(claude): stop passing --no-env-file to native binary in dev mode

The Claude Agent SDK switched from shipping `cli.js` inside the package
to per-platform native binaries via optional deps somewhere in the
0.2.x series. As of 0.2.121 there is no `cli.js` in the SDK package;
dev mode resolves to `@anthropic-ai/claude-agent-sdk-darwin-arm64/claude`
(Mach-O). That native binary rejects `--no-env-file` with
`error: unknown option '--no-env-file'` and the subprocess exits 1.

`shouldPassNoEnvFile` was returning true on `cliPath === undefined` on
the assumption that "dev mode = JS executable run via Bun". That
assumption is dead. Tighten the predicate to only return true on an
explicit `.js` suffix, so we only emit the flag when the SDK is going
to spawn a Bun-runnable script.

CWD `.env` leak protection is unaffected. `stripCwdEnv()` in
`@archon/paths` (coleam00#1067) deletes Bun-auto-loaded `.env`/`.env.local`/
`.env.development`/`.env.production` keys from `process.env` at every
Archon entry point before any subprocess is spawned. The native Claude
binary does not auto-load `.env` from its cwd either. `--no-env-file`
was belt-and-suspenders for the JS-via-Bun case only.

Verified end-to-end with a sentinel: added a unique
`ARCHON_LEAK_SENTINEL_$$` to Archon's `.env`, ran e2e-claude-smoke
with a bash probe checking the subprocess env. stderr shows
`[archon] stripped 23 keys from /Users/rasmus/Projects/cole/Archon
(.env, .env.local)` — sentinel was deleted. Bash node prints
`PASS: simple='4', no sentinel leak`. Workflow completes cleanly,
no `--no-env-file` rejection from the SDK binary.

bun run validate: green across all 10 packages.

* fix(claude): address review on coleam00#1461 (stale docs + test gaps)

Critical: file-level JSDoc at provider.ts:18 still claimed dev mode
resolves cli.js. Updated to reflect SDK 0.2.x's switch to per-platform
native binaries.

Important: security.md still listed --no-env-file as item 2 of
target-repo .env isolation. Scoped that bullet to legacy
Bun-runnable JS entry points and called out that native binaries
don't auto-load .env from cwd. Added an Unreleased Fixed entry to
CHANGELOG.md. Updated binary-resolver.ts JSDoc title that referenced
cli.js.

Polish: widened the predicate to accept .mjs and .cjs (also
Bun-runnable JS — matches the SDK's own internal extension list).
Dropped the redundant `passesNoEnvFile` log field that mirrored
`isJsExecutable`. Added unit cases for .mjs/.cjs (now true) and
.ts/.tsx/.jsx (deliberately false — never SDK entry points).

Added an integration test that mocks resolveClaudeBinaryPath to
return a .js path and asserts executableArgs: ['--no-env-file']
flows through buildBaseClaudeOptions all the way to the SDK call —
catches future regressions in the conditional spread.

bun run validate: green across all 10 packages.
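
The tightened predicate reduces to a suffix check over the Bun-runnable extensions. A minimal sketch, assuming the name from the commit (the real signature in the Claude provider may differ):

```typescript
// Emit --no-env-file only when the resolved SDK entry point is a
// Bun-runnable script; native binaries reject the flag, and an undefined
// cliPath (dev mode) may now resolve to a native binary too.
const JS_SUFFIXES = ['.js', '.mjs', '.cjs'];

function shouldPassNoEnvFile(cliPath: string | undefined): boolean {
  if (cliPath === undefined) return false;
  return JS_SUFFIXES.some((ext) => cliPath.endsWith(ext));
}
```

.ts/.tsx/.jsx deliberately stay false: they are never SDK entry points.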
* refactor(workflows): trust the SDK for model validation

Drops cross-provider model inference and hard-coded model allow-lists.
The string a workflow author writes in `model:` is forwarded to the SDK
unchanged; the SDK and its API decide whether the model exists. Provider
identity is the only thing Archon validates at load time — typos like
`provider: claud` are caught early; everything else fails at runtime
through the SDK's normal error path.

Why this matters: a recent run on Sasha showed `provider: claude` +
`model: opus[1m]` getting silently routed to Codex (because Codex's
isModelCompatible was defined as the complement of Claude's, so anything
not literally `sonnet|opus|haiku` matched). Codex then rejected the model
as a `⚠️` system warning and the node "completed" in 2.1 seconds with
empty output, after which the workflow opened a hallucinated PR. Three
stacked bugs and two amplifiers; this commit removes all five.

Changes:

- Delete model-validation.ts entirely (inferProviderFromModel and
  isModelCompatible are gone). Drop the matching field from
  ProviderRegistration and from the claude/codex/pi entries.
- Replace the resolver in executor.ts and dag-executor.ts (both the
  per-node and per-loop paths) with a flat
  `node.provider ?? workflow.provider ?? config.assistant`. Model never
  influences provider selection; load-time validation is just
  isRegisteredProvider on the resolved provider id.
- Remove the dag-node Zod superRefine that recomputed model-compat —
  load-time provider validation moved to loader.ts.
- Codex provider: stream loop now matches Claude's contract. error
  events that aren't followed by turn.completed yield
  `result.isError: true` (subtype `codex_stream_incomplete`) so the
  dag-executor's existing isError path catches them. turn.failed
  becomes `codex_turn_failed` with the same shape. Iterator close
  without a terminal event is itself a fail-stop. MCP-client errors
  remain filtered (Codex retries those internally).
- dag-executor: AI nodes that exit the streaming loop with empty
  assistant text and no structured output now fail with
  `dag.node_empty_output` instead of completing silently — the Sasha
  bug's final amplifier. Bash/script/approval nodes are unaffected.
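
The replacement resolver is deliberately flat. A sketch with illustrative interface names (the precedence chain itself is quoted from the commit):

```typescript
interface DagNode { provider?: string }
interface Workflow { provider?: string }
interface ArchonConfig { assistant: string }

// Model never influences provider selection; only explicit provider ids
// do, falling from node to workflow to the configured default assistant.
function resolveProviderId(node: DagNode, workflow: Workflow, config: ArchonConfig): string {
  return node.provider ?? workflow.provider ?? config.assistant;
}
```

Under this scheme `provider: claude` + `model: opus[1m]` can never route to Codex; the worst case is a clean SDK-side rejection of the model string.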

Tests: model-validation.test.ts and isPiModelCompatible block deleted;
codex provider tests rewritten to assert the new fail-stop contract;
dag-executor empty-output test flipped to assert failure; new tests
cover (a) loader rejecting unknown provider, (b) loader accepting any
model string with a known provider, (c) executor passing
provider+model through without re-routing, (d) executor throwing on
unknown provider, (e) Codex synthesizing fail-stop on iterator close.
Two cost-tracking tests adjusted to yield non-empty assistant text
since their intent was cost accumulation, not empty-output handling.

bun run validate: green (check:bundled, type-check, lint
--max-warnings 0, format:check, all packages' test suites — 0 fail).

End-to-end smoke (.archon/workflows/test-workflows/):
- e2e-deterministic: PASS (engine healthy)
- e2e-codex-smoke: PASS (Codex sendQuery + structured output work)
- e2e-claude-smoke: FAIL with `error: unknown option '--no-env-file'`
  — this is a regression from the SDK 0.2.121 bump (coleam00#1460), not from
  this redesign. The Claude provider source is unchanged on this
  branch. To be fixed separately.

* fix(workflows): address review on coleam00#1463

Critical:
- C1: empty-output guard now skips idle-timeout completions. The on-screen
  message says "completed via idle timeout"; flipping that to a failure
  contradicted the user-facing log. Added !nodeIdleTimedOut to the guard.
- C2: per-node provider identity is now validated at YAML load time.
  Loader iterates dagNodes after parsing and rejects any unknown
  provider id with "Node 'X': unknown provider 'Y'. Registered: ...".
  The dag-executor's runtime check stays as defense-in-depth.

Important:
- I1: CHANGELOG entry under [Unreleased] > Changed describing the
  resolver redesign + an explicit migration line for workflows that
  relied on cross-provider model inference.
- I2: restored the dropped mockLogger.error('turn_failed') assertion in
  the turn.failed-without-error-message test.
- I3: empty-output test now also asserts store.failWorkflowRun was
  called, matching the parallel error_max_budget_usd test pattern.
- I4: new test that proves a node yielding zero assistant text but a
  valid structuredOutput is treated as a successful completion (not
  caught by the empty-output guard).
- I5: rewrote the post-loop comment in codex/provider.ts to be precise
  about which dag-executor branch catches the synthesized result chunk
  (the throwing msg.isError branch, distinct from the empty-output
  guard's { state: 'failed' } return).
- I6: removed PR-era "redesign" / "Sasha workflow" references from
  three test-file comments.
- I7: docs sweep for the deleted isModelCompatible field — six files
  updated (CLAUDE.md, two docs guides, quick-reference, contributing
  guide, architecture reference).

Polish:
- S3: dropped the dead sawTerminal flag in streamCodexEvents — both
  terminal branches `return`, so reaching the post-loop block always
  means no terminal fired. Pure simplification.
- S4: dropped parsePiModelRef and PiModelRef from community/pi/index.ts
  exports. The parser is consumed only by Pi's provider.ts; making it
  package-internal narrows the public surface.
- S6: new Codex test for the bare-stream-close case (zero events,
  iterator just ends) — locks in the default fallback message used
  when no captured non-MCP error is available.
- S7: new dag-executor test for per-node unknown-provider at runtime.
  Bypasses the loader to exercise resolveNodeProviderAndModel's throw,
  asserts the node_failed event carries the "unknown provider 'claud'"
  detail (the workflow-level fail message is a generic summary).

bun run validate green across all 10 packages.

* fix(workflows): address CodeRabbit review on coleam00#1463

Two real issues from CodeRabbit's automated pass on db95e8a:

1. Empty-output fail-stop now applies to loop iterations too. The
   single-shot AI-node guard at executeNodeInternal only covered
   prompt/command nodes; executeLoopNode has its own streaming path,
   so a provider that closed cleanly with zero content could pause an
   interactive loop with a blank gate or burn the full max_iterations
   budget. Mirrors the contract of the single-shot guard:
   `fullOutput.trim() === '' && !iterationIdleTimedOut` fails the
   iteration with a `loop_iteration_failed` event carrying a clear
   error. Idle-timeout exits remain exempt for the same reason as
   single-shot nodes — the on-screen "completed via idle timeout"
   message would otherwise contradict the failure.

2. Unknown loop providers now throw instead of return-failed. The
   early-return path bypassed the layer dispatch's outer catch at
   line 2870, so loop nodes with an invalid per-node `provider:`
   field skipped the standard `node_failed` event, the user-facing
   message, and the pre-execution log entry. Throwing reuses the
   common failure path — same shape as resolveNodeProviderAndModel
   uses for non-loop nodes.
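
The guard shared by both paths (item 1 here, plus the idle-timeout and structured-output exemptions from the earlier review round) can be sketched as a single predicate. The function name and parameter shape are illustrative:

```typescript
// Minimal sketch of the empty-output fail-stop predicate. A node or
// loop iteration that exits the stream with blank assistant text fails,
// unless it completed via idle timeout (failing would contradict the
// on-screen "completed via idle timeout" message) or produced a valid
// structured output (treated as a successful completion).
function shouldFailEmptyOutput(
  fullOutput: string,
  idleTimedOut: boolean,
  structuredOutput?: unknown,
): boolean {
  return (
    fullOutput.trim() === "" &&
    !idleTimedOut &&
    structuredOutput === undefined
  );
}
```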

Both align with CLAUDE.md's "fail fast, explicit errors, never silently
swallow" principle. The third CodeRabbit finding (boundary violation
for `@archon/providers` import in loader.ts) is consistent with
existing precedent — `dag-executor.ts`, `executor.ts`, and
`validator.ts` already import from the same path; the runtime contract
(every entrypoint bootstraps the registry before parseWorkflow runs) is
already enforced in tests and documented at `loader.test.ts:31`.

bun run validate green across all 10 packages.
… crash on missing source (coleam00#1394)

The 18 top-level `import … with { type: 'text' }` statements in
`bundled-skill.ts` resolve at module load. For `bun build --compile` that's
build time, so the binary embeds the strings and works regardless of any
on-disk skill files. For `bun link` (linked-source) installs that's every
`archon` invocation — including `archon --help`, which doesn't even use the
skill content. If any of the 18 source files are missing or moved, the
import fails and the CLI cannot start at all.

The skill content is data the binary deploys via `archon setup`, not data
the CLI needs at runtime. There's only one consumer in production code:
`copyArchonSkill()` in `setup.ts`. Moving the import into that function as
a dynamic import preserves the compiled-binary behavior (Bun's bundler
statically analyses literal-string `import()` and embeds the chunk —
verified by grepping the SKILL.md frontmatter out of a freshly compiled
binary) while making the linked-source install resilient: only `archon
setup` triggers the bundled-skill module load now. Verified: a known skill
string appears in the compiled binary 1×, and `archon --help` no longer
needs the source files to start.

`copyArchonSkill()` becomes async because the dynamic import is a Promise.
The single production call site is already in an async function and gets
an `await`. The four `setup.test.ts` cases become async too.
…art (coleam00#1307)

* fix(docker): register safe.directory for all repos on bind-mount restart (coleam00#1279)

On macOS bind mounts (VirtioFS), host UIDs do not map to the
container appuser (1001). Git 2.35.2+ rejects operations with
"dubious ownership". The Dockerfile RUN-layer gitconfig is not
inherited by bind mounts on restart, and worktree paths are
unknown at build time.

Add a find loop in docker-entrypoint.sh that dynamically registers
every .git directory under /.archon as a safe directory after the
chown block. This is idempotent and handles worktrees at any depth.

* chore: add -prune to avoid scanning .git internals
…mat persist (coleam00#1480)

The original maintainer-standup workflow relies on Claude SDK-enforced
output_format (one big JSON wrapping the brief markdown + state). It works on
Claude but hangs in nested Claude Code sessions (coleam00#1067), and Pi/Minimax can't
reliably emit a 30KB JSON wrapper around markdown content.

Changes:
- Drop output_format from synthesize node. Synthesis now emits brief
  markdown plain, followed by a delimited ARCHON_STATE_JSON_BEGIN/END
  block.
- Replace persist (script:) with bash: that pipes synth output into a new
  .archon/scripts/maintainer-standup-persist.ts. The script tries the
  delimiter format first and falls back to the legacy
  {brief_markdown, next_state} JSON wrapper Pi/Minimax tends to emit
  regardless of prompt instructions. Either format yields the same brief
  + state.json on disk.
- Add maintainer-standup-minimax.yaml — a 1:1 copy with provider: pi,
  model: minimax/MiniMax-M2.7 — for use when the Claude variant hangs or
  when the maintainer wants to spend Pi tokens instead.
- Update the maintainer-standup synthesizer prompt: top-of-file output
  format directive, explicit "do not wrap in JSON object" guidance,
  delimiter-based example.

Tested end-to-end on the Minimax variant against today's PR/issue payload;
brief was extracted via the JSON-wrapper fallback path and written to
.archon/maintainer-standup/briefs/2026-04-29.md.
…nses (coleam00#1440)

* fix(providers/pi): tolerate prose preamble in structured-output responses

Pi has no SDK-level JSON-mode parameter and Minimax's Anthropic-compat
proxy doesn't expose response_format support, so reasoning models like
Minimax M2.7 routinely "think out loud" before emitting the JSON we
asked for, e.g.:

  "Now I have all the inputs. Let me evaluate the three gates:
   **Gate A — Direction alignment**: ...
   {\"verdict\":\"review\",\"direction_alignment\":\"aligned\",...}"

The previous tryParseStructuredOutput stripped fences then JSON.parse'd
the whole string, which fails the moment a single character of prose
appears before the {. The maintainer-review-pr workflow's gate node was
hitting this on ~70% of runs, propagating to 7-of-10 silent failures
in a sequential review batch this afternoon.

Add two preamble-tolerant fallback tiers:

  Tier 1 (existing): clean JSON.parse — fast path for compliant models
  Tier 2 (new):      scan backward to last `{`, parse from there — flat
                     JSON with reasoning preamble (Minimax M2.7 case)
  Tier 3 (new):      scan forward to first `{`, parse from there —
                     nested JSON with preamble (Tier 2 lands inside a
                     child object and fails)

The existing "trailing-prose-too" failure case (preamble + JSON +
postamble) still returns undefined — handling it would require brace-
depth tracking and isn't worth the cost. The two new tiers cover the
failure mode actually observed in production.

Tested against the real Minimax preamble pattern and a synthetic
preamble + nested-JSON case. 39 event-bridge tests pass.

No SDK changes, no Pi extensions, no Minimax API dependencies. Pure
Archon-side parser hardening — backward-compatible and benefits every
verbose-preamble model routed through Pi, not just Minimax.

* fix(providers/pi): address CodeRabbit review on coleam00#1440

- Drop the redundant `firstBrace !== lastBrace` guard on Tier 3. When
  the only `{` past position 0 is the same one Tier 2 just tried, Tier
  3 will re-run JSON.parse on the same input and fail identically — one
  redundant call accepted for simpler control flow. (CodeRabbit cleanup)

- Tighten the Tier 1 comment to drop the model name list (will date)
  and tighten the Tier 2 comment to scope its claim to "preamble +
  flat JSON" rather than overgeneralized "nested JSON". (CodeRabbit
  comment-quality cleanup)

- Standardize the catch-block comments to `// fall through`. A comment
  is needed in the first place because eslint's `no-empty` rule flags
  bare `catch {}` blocks. (lint compliance)

- Add a regression test for `{` inside a string value that pins the
  Tier 2 → Tier 3 cascade composition: lastIndexOf lands inside the
  string brace, JSON.parse rejects the resulting `{ inside","ok":true}`
  fragment, Tier 3 forward-scans to the JSON object's outer `{` and
  parses cleanly. Note: the preamble must not itself contain `{`,
  otherwise Tier 3 lands on it instead of on the JSON object — covered
  in the test comment so the constraint isn't lost. (CodeRabbit test
  coverage)

- Update `packages/docs-web/src/content/docs/getting-started/ai-assistants.md:378`
  — the previous "degrades cleanly when the model emits prose" claim is
  now partially false (preamble-only patterns are recovered). New text
  enumerates the three handled forms (bare, fenced, prose-preamble)
  and clarifies the trailing-text-interleaved case still degrades.
  (CodeRabbit docs-impact)

- Add a CHANGELOG entry under `## [Unreleased] / Fixed`.

CodeRabbit's "critical" suffix-check suggestion (`cleaned.slice(brace)
.trimEnd().endsWith('}')`) is not applied. JSON.parse already rejects
trailing non-whitespace, so any successful parse from a `{`-prefixed
slice already ends with `}` after trim — the check is tautological in
all cases the function actually reaches it. The actual concern (a
self-contained `{...}` fragment in preamble that happens to be valid
JSON, with no real answer in the response) wouldn't be caught by the
suffix check either, since the fragment IS a valid JSON object that
ends with `}`. Real defense against that case would require schema
validation, which lives at the call site (executor's
`structured_output_missing` warning path), not in this parser.

Tests: 40 pass / 0 fail. Lint, format, type-check all clean.

* fix(providers/pi): drop Tier 2 backward-scan to avoid brace-postamble footgun

CodeRabbit flagged a real correctness issue: a backward scan from the last
`{` silently returns the wrong JSON object when the response contains a
brace-bearing example after the real payload (e.g. `{"actual":1}\n
For example: {"x":2}` parses the trailing example instead of failing).
This breaks the conservative-failure contract callers rely on.

Tier 2 was always strictly worse than Tier 3 anyway: every preamble pattern
Tier 2 handled, Tier 3 handles too, and Tier 3 doesn't have the
multi-fragment hazard. Dropping Tier 2 entirely removes the footgun and
keeps the parser simple (clean parse → forward scan → undefined).

Tests updated:
- Renamed/clarified existing tests to reference the surviving forward-scan
  tier instead of the deleted backward-scan tier (no behavior change for
  the cases they cover — preamble has no extra braces).
- Added a regression test for the brace-postamble footgun: input with a
  trailing example like `{"actual":"value"}\nFor example: {"verdict":"review"}`
  must return undefined under the new contract.
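
The surviving contract (clean parse → forward scan → undefined) can be sketched as follows. This is a simplified illustration of `tryParseStructuredOutput`: the real parser also strips markdown code fences first, which is elided here.

```typescript
function tryParseStructuredOutput(raw: string): unknown {
  const cleaned = raw.trim();
  // Tier 1: compliant models emit bare JSON.
  try {
    return JSON.parse(cleaned);
  } catch {
    // fall through
  }
  // Tier 2 (forward scan): tolerate a prose preamble by parsing from
  // the first `{`. JSON.parse rejects trailing non-whitespace, so a
  // brace-bearing postamble still fails conservatively rather than
  // returning the wrong fragment.
  const first = cleaned.indexOf("{");
  if (first > 0) {
    try {
      return JSON.parse(cleaned.slice(first));
    } catch {
      // fall through
    }
  }
  return undefined;
}
```

The brace-postamble regression case from this commit falls out naturally: the first `{` is at position 0, so the forward scan would retry the exact string Tier 1 already rejected, and the function returns undefined.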
…m00#1389) (coleam00#1393)

* fix(workflows): concise failure messages for bash/script nodes (coleam00#1389)

When a `bash:` or `script:` node fails, the executor was embedding
`err.message` verbatim into the user-visible error. For inline scripts run
via `bash -c <body>` / `bun -e <body>`, Node's `promisify(execFile)` puts
the entire substituted script body into `err.message`, `err.cmd`, and the
first line of `err.stack` — and the Pino log serialized all three, repeating
the body ~3× in one structured log line. The actionable diagnostic was
buried under kilobytes of echoed source.

Route both catch-block default branches through a new
`formatSubprocessFailure(err, label)` helper in `executor-shared.ts` that:

- strips the `Command failed: <cmd>` prefix line (which carries the body)
- prefers `err.stderr` — Bun/bash emit the actionable diagnostic there
- tail-truncates at 2 KB with a `…[truncated]` marker
- returns a controlled `logFields` subset (`exitCode`, `killed`, `stderrTail`)
  so Pino never re-serializes `err.message` / `err.stack` / `err.cmd`

Also drops the script-node `stderrHint` concatenation — stderr is already
handled by the helper, so the previous code appended it twice.

Timeout / ENOENT / EACCES branches are preserved verbatim.
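
A sketch of the helper's shape, under stated assumptions: the 2 KB bound, the marker placement, and the `stderrTail` length are guesses at the real `executor-shared.ts` implementation, not a copy of it.

```typescript
const SUBPROCESS_ERROR_MAX_CHARS = 2048; // assumed value of the 2 KB bound

interface RawSubprocessError {
  message: string;
  stderr?: string;
  code?: number | string | null;
  killed?: boolean;
}

function formatSubprocessFailure(err: RawSubprocessError, label: string) {
  // Prefer stderr (bash/Bun put the actionable diagnostic there); fall
  // back to err.message minus the "Command failed: <cmd>" prefix line,
  // which is what embeds the whole inline script body.
  let diagnostic: string;
  if (err.stderr !== undefined && err.stderr.trim() !== "") {
    diagnostic = err.stderr.trim();
  } else {
    diagnostic = err.message.replace(/^Command failed:[^\n]*\n?/, "").trim();
  }
  // Tail-truncate: the final lines usually carry the real error.
  if (diagnostic.length > SUBPROCESS_ERROR_MAX_CHARS) {
    diagnostic = "…[truncated]" + diagnostic.slice(-SUBPROCESS_ERROR_MAX_CHARS);
  }
  const exitSuffix = err.code != null ? ` (exit code ${err.code})` : "";
  return {
    message: `${label} failed${exitSuffix}: ${diagnostic}`,
    // Controlled subset: Pino never re-serializes message/stack/cmd.
    logFields: {
      exitCode: err.code ?? null,
      killed: err.killed ?? false,
      stderrTail: diagnostic.slice(-256),
    },
  };
}
```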

Fixes coleam00#1389

* fix(workflows): address PR review feedback for coleam00#1389

- Run formatSubprocessFailure unconditionally so timeout / ENOENT / EACCES
  branches also get sanitized log fields (the timeout message also embeds
  the `Command failed: bash -c <body>` line).
- Drop `errType: err.constructor.name` (always 'Error' in production) and
  replace with `nodeType: 'bash' | 'script'` for actual discriminating value.
- Replace chained `||` + ternary in diagnostic selection with explicit
  if/else for readability.
- Simplify exit suffix guard: `err.code != null` instead of double typeof.
- Make stderrTail emptiness check explicit.
- Drop `RawSubprocessError` export (no external consumer) and widen `code`
  to `number | string | null` to mirror Node's ExecFileException.
- Tighten test length bound to <2100 (was <4000 / <2200) so a doubling of
  SUBPROCESS_ERROR_MAX_CHARS would actually trip the assertion.
- Replace broad regex assertion with `.toContain('[eval]')` — the location
  marker is the strongest signal that diagnostic content survived.
- Strip issue-number citations from describe block and test comments.
- Update script-nodes.md to distinguish stderr behavior on success vs
  failure paths.
- Add CHANGELOG Unreleased / Fixed entry for the user-visible change.
…o prevent infinite failure loop (coleam00#1294)

* fix(orchestrator): clear stale session ID on error_during_execution to prevent infinite failure loop

When a Claude API session expires (e.g. after container restart), the orchestrator
persists the new (failed) session ID from the error result, causing every subsequent
message in that conversation to hit the same error — an infinite failure loop.

Fix: on error_during_execution result, set assistant_session_id to NULL instead of
persisting the failed session ID. The next message starts a fresh session with full
context rebuilt from the DB. Conversation history is unaffected since it lives in
remote_agent_messages, independent of the Claude session.

Changes:
- updateSession() and tryPersistSessionId() now accept string | null
- Both handleStreamMode and handleBatchMode clear session ID on error_during_execution

Fixes coleam00#1280
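
The persistence decision reduces to a small pure function. This is a hedged sketch; the real change threads the `string | null` through `updateSession()` and `tryPersistSessionId()` rather than living in one helper:

```typescript
function sessionIdToPersist(
  resultSubtype: string,
  resultSessionId: string | undefined,
  current: string | null,
): string | null {
  if (resultSubtype === "error_during_execution") {
    // Don't persist the failed session id: NULL forces the next message
    // to start a fresh session with context rebuilt from the DB.
    return null;
  }
  return resultSessionId ?? current;
}
```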

* test(orchestrator): add stale session clearing tests + address review feedback

Co-Authored-By: Claude Opus 4 (1M context) <noreply@anthropic.com>
Signed-off-by: kagura-agent <kagura.agent.ai@gmail.com>

---------

Signed-off-by: kagura-agent <kagura.agent.ai@gmail.com>
Co-authored-by: Claude Opus 4 (1M context) <noreply@anthropic.com>
coleam00#1481)

* fix(claude): honor CLAUDE_BIN_PATH in dev mode for libc-mismatch hosts

The Claude Agent SDK auto-resolves its bundled native binary in
[linux-x64-musl, linux-x64] order. On glibc Linux hosts (Ubuntu/Debian/
Fedora), Bun installs both via optionalDependencies and the musl variant
is picked first; its ELF interpreter (/lib/ld-musl-x86_64.so.1) does not
exist on glibc, so spawn fails and the SDK reports a misleading "binary
not found" — the file is on disk, the loader is not.

The documented escape hatch CLAUDE_BIN_PATH was dead code in dev mode:
the resolver early-returned undefined when BUNDLED_IS_BINARY=false before
ever reading the env var. The only workaround was patching node_modules.

Move the env-var block above the BUNDLED_IS_BINARY return. Config-file
path stays binary-mode-only — it's per-repo, not per-machine; env is the
right knob for libc mismatches.

Behavior preserved:
- env unset                  → unchanged (undefined in dev, autodetect/throw in binary)
- env set + file exists      → resolved (was binary-only; now also dev)
- env set + file missing     → clear error (was binary-only; now also dev)

Closes coleam00#1474
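
The reordered resolver can be sketched as below. Names and the trailing binary-mode tier are illustrative; the behavior table above is what the sketch encodes, including the empty-string case pinned by the follow-up test.

```typescript
import { existsSync } from "node:fs";

function resolveClaudeBin(
  env: Record<string, string | undefined>,
  bundledIsBinary: boolean,
  configPath?: string,
): string | undefined {
  // Env-var tier now sits above the dev-mode early return.
  const envPath = env["CLAUDE_BIN_PATH"];
  if (envPath !== undefined && envPath !== "") {
    if (!existsSync(envPath)) {
      throw new Error(
        `CLAUDE_BIN_PATH is set to '${envPath}' but the file does not exist`,
      );
    }
    return envPath;
  }
  if (!bundledIsBinary) return undefined; // dev mode: SDK autodetects
  // Binary mode: per-repo config-file path, then autodetect/throw (elided).
  return configPath;
}
```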

* chore(claude): address CodeRabbit review on coleam00#1481

- CHANGELOG entry under [Unreleased] / Fixed describing the dev-mode
  CLAUDE_BIN_PATH escape hatch (previously ignored). Notes that
  config-file path remains binary-mode-only and that env-loading +
  target-repo .env isolation are unchanged downstream.
- Empty-string test pinning that CLAUDE_BIN_PATH='' falls through
  to undefined rather than throwing — protects against a future
  predicate typo that would treat empty as "set".
- One-line note in ai-assistants.md "Binary path configuration"
  section pointing dev-mode users at the env-var override for the
  glibc/musl mismatch case.

Skipped from the review:
- The other two docs-page rewrites (configuration.md /
  troubleshooting.md): the error message itself names CLAUDE_BIN_PATH,
  and coleam00#1474 documents the use case publicly. One mention in
  ai-assistants.md is enough for discovery.
- Type-style consistency tweaks in the test file: pure bikeshed.
coleam00#1478)

The DAG-structure validator scans `node.when`, `node.prompt`, and
`loop.prompt` strings for `$nodeId.output` references. Prompt bodies
in builder-style workflows embed fenced and inline code as
documentation for the LLM (e.g. `archon-workflow-builder` shows how to
author a script node), and those literal `$<other-node>.output`
mentions were being treated as real cross-node references. Result:
`archon-workflow-builder` (a bundled default) failed to load, and
`bun run cli workflow run archon-workflow-builder ...` reported
"references unknown node '$other-node.output'".

Strip triple-backtick fenced blocks and single-backtick inline code
from prompt and loop.prompt before scanning. `when:` clauses are
JS-like expressions and never carry markdown code, so they pass through
unchanged. Real cross-node refs in prose continue to validate.
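
A sketch of the pre-scan stripping; the regexes and function names are illustrative, not the validator's actual code:

```typescript
// Strip fenced and inline code from a prompt body before scanning it
// for cross-node references, so documentation examples in prompts are
// not treated as real edges.
function stripMarkdownCode(prompt: string): string {
  return prompt
    .replace(/```[\s\S]*?```/g, "") // triple-backtick fenced blocks
    .replace(/`[^`\n]*`/g, "");     // single-backtick inline code
}

function findNodeRefs(prompt: string): string[] {
  return stripMarkdownCode(prompt).match(/\$[\w-]+\.output/g) ?? [];
}
```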

Also wraps one bare `$nodeId.output` mention in
`archon-workflow-builder.yaml` Rules section in inline backticks so it
reads as documentation alongside the surrounding `$nodeId.output`
mentions that already use this style.

Closes coleam00#1413
# Conflicts:
#	packages/providers/src/community/pi/event-bridge.ts
`refactor(workflows): trust the SDK for model validation` (bf1f471)
removed `isModelCompatible` from `ProviderRegistration` entirely; Pi got
the same treatment in its own registration. Mirror that for Copilot:
drop the field from `registerProvider({...})`, the helper function and
its export, and the registry-test block that exercised it. The
`getProviderInfoList` projection assertion stays as a forward guard,
matching the upstream Pi convention.

Reported in PR coleam00#1351 review by @danielscholl.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…iled-binary boot

Mirror the Pi precedent (coleam00#1355, v0.3.7). The Copilot SDK has not been
audited for module-load side effects, and Pi's symptom — file I/O at
import time crashing compiled Archon binaries with ENOENT before any
command runs — is a class of bug that doesn't surface in dev mode. The
defensive shape:

  - Top-of-file imports converted to type-only.
  - `CopilotClient` and `approveAll` value bindings dynamic-imported once
    at the start of `sendQuery()`'s inner async block. Module-namespace
    binding (`copilotSdk.CopilotClient`) avoids fighting the camelCase
    naming-convention lint rule on local variables.
  - `approveAll` threaded into `buildSessionConfig()` as a parameter so
    no helper has to dynamic-import the SDK again.

Adds `provider-lazy-load.test.ts` (its own `bun test` invocation, like
Pi's, because `mock.module` is process-wide). The test walks the same
`registerCommunityProviders()` path the CLI and server take and asserts
the SDK module never resolved.

Reported in PR coleam00#1351 review by @danielscholl.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
… tokens

Previously any of `COPILOT_GITHUB_TOKEN` / `GH_TOKEN` / `GITHUB_TOKEN`
in the resolved env would force `useLoggedInUser: false`, silently
overriding an explicit `useLoggedInUser: true` in the user's
`.archon/config.yaml`. The real-world case: a user who has `GH_TOKEN`
exported for the `gh` CLI on a *different* GitHub account than their
Copilot subscription cannot direct Copilot to their logged-in session.

Invert the precedence so explicit config wins:

  useLoggedInUser: copilotConfig.useLoggedInUser ?? !githubToken

Default behavior is unchanged when the user sets nothing — env token
present means `useLoggedInUser: false`, otherwise `true`. Locked in
with four test cases covering the cross-product of (env-token? × config?).
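
The inverted precedence is a one-liner; this sketch uses illustrative plumbing names around the exact expression from the commit:

```typescript
function resolveUseLoggedInUser(
  configValue: boolean | undefined,
  githubToken: string | undefined,
): boolean {
  // Explicit config wins; only when the user sets nothing does the
  // presence of an env token flip this off.
  return configValue ?? !githubToken;
}
```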

Reported in PR coleam00#1351 review by @danielscholl.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
)

- Add --base $BASE_BRANCH to gh pr create in archon-architect,
  archon-refactor-safely, and archon-implement-issue
- Add verify-pr-base bash node to all 9 PR-creating workflows
  that auto-corrects via gh pr edit if the AI mis-targets
- Rewire downstream depends_on edges through verify-pr-base
- Regenerate bundled-defaults.generated.ts
coleam00#1483)

* chore(deps): remove stale package-lock.json to clear Dependabot noise

This file was deleted in coleam00#85 (Bun migration) but accidentally re-committed
in coleam00#89 unrelated to that PR's actual fix. It hasn't been touched since
April and isn't used by anything (CI runs `bun install`), but Dependabot
keeps scanning it — every one of the 21 open alerts triaged in coleam00#1353 is
against this file, not bun.lock.

Removing it closes all 21 alerts. The axios `^1.15.0` override in
package.json stays — it's doing real work for the bun tree because
@slack/bolt pulls in a vulnerable axios transitively (CVE-2025-62718).

Add package-lock.json (and yarn/pnpm lockfiles) to .gitignore so this
can't silently slip back in.

Closes coleam00#1353

* chore(deps): patch four runtime CVEs in bun.lock via overrides

Targets coleam00#1353 alerts that resolve in the actual runtime tree (bun.lock),
not just the stale package-lock.json removed in the previous commit.

Added overrides:
- follow-redirects ^1.16.0 — auth-header leak on cross-domain redirect
  (GHSA-r4q5-vmmm-2653); via @slack/bolt
- path-to-regexp ^8.4.2 — DoS via sequential optional groups
  (CVE-2026-4926, CVE-2026-4923); via @slack/bolt + claude-agent-sdk
- qs ^6.15.1 — arrayLimit bypass DoS
  (CVE-2025-15284, CVE-2026-2391); via @slack/bolt
- flatted ^3.4.2 — prototype pollution in parse()
  (CVE-2026-33228); dev-only via eslint chain

bun audit confirms each resolves to a single non-vulnerable version
across the tree. bun run validate green. No code changes — purely
transitive bumps; we don't import any of these directly.
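
For reference, the root `package.json` section takes this shape (versions copied from the list above; surrounding fields elided):

```json
{
  "overrides": {
    "follow-redirects": "^1.16.0",
    "path-to-regexp": "^8.4.2",
    "qs": "^6.15.1",
    "flatted": "^3.4.2"
  }
}
```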

Skipped (require deeper triage): undici, lodash, picomatch — each has
multiple major versions resolved in the bun tree, so a single override
would force-downgrade other consumers.
Regression from coleam00#1481 (honor CLAUDE_BIN_PATH in dev mode). The CI workflow
set `CLAUDE_BIN_PATH: ~/.local/bin/claude` in YAML `env:` blocks; YAML does
not expand `~`, so the literal string was passed to the resolver.

Before coleam00#1481, dev mode silently ignored the env var and the SDK
auto-resolved its bundled binary — so the broken value was harmless. After
coleam00#1481, dev mode honors it, the file-existence check fails on the literal
`~`, and the smoke job aborts with "CLAUDE_BIN_PATH is set ... but the file
does not exist".

Move the env-var assignment into the run-step shell where `$HOME` resolves.
Both e2e-claude and e2e-mixed-providers jobs are affected.
* chore: update Homebrew formula for v0.3.9

* chore(release-skill): use --help (not version) for Step 1.5 smoke probe (#1359)

The pre-flight binary smoke does a bare `bun build --compile` — it
deliberately skips `scripts/build-binaries.sh` to stay fast. That means
packages/paths/src/bundled-build.ts retains its dev defaults, including
BUNDLED_IS_BINARY = false.

version.ts branches on BUNDLED_IS_BINARY: when true it returns the
embedded string; when false it calls getDevVersion(), which reads
package.json at `SCRIPT_DIR/../../../../package.json`. Inside a compiled
binary SCRIPT_DIR resolves under `$bunfs/root/`, the walk produces a CWD-
relative path that doesn't exist, and the smoke aborts with "Failed to
read version: package.json not found" — a false positive.

Hit during the 0.3.8 release attempt: the real Pi lazy-load fix was
working end-to-end; the smoke test was the only thing failing.

Use --help instead. It exercises the same module-init graph (so it still
catches the real failure modes the skill lists — Pi package.json init
crash, Bun --bytecode bugs, CJS wrapper issues, circular imports under
minify) but has no dev/binary branch, so no false positive.

Also add a longer comment block explaining why --help is preferred, so
this doesn't get "normalized" back to `version` by a future drive-by edit.

* chore(test-release-skill): preserve archon-stable across test cycles

The brew path of /test-release runs `brew uninstall` in Phase 5 to leave the
system in its pre-test state. For operators using the dual-homebrew pattern
(renamed brew binary at `/opt/homebrew/bin/archon-stable` so it coexists with
a `bun link` dev `archon`), that uninstall wipes the Cellar dir the
`archon-stable` symlink points into → `archon-stable` becomes dangling →
`brew cleanup` sweeps it away on the next brew op. Next time the operator
wants stable, they have to manually re-run `brew-upgrade-archon`.

Fix: make the skill aware of `archon-stable` and restore it transparently.

- Phase 2 item 4: detect the `archon-stable` symlink before any brew op;
  export `ARCHON_STABLE_WAS_INSTALLED=yes` so Phase 5 knows to restore it.
  Only triggers for the brew path (curl-mac/curl-vps don't touch brew so
  they leave `archon-stable` alone).
- Phase 5 brew path: after `brew uninstall + untap`, if the flag was set,
  re-tap + re-install + rename. Verifies the restored `archon-stable`
  reports a version and warns (non-fatal) if the rename target is missing.
  Documents the tradeoff: the restored version is "whatever the tap ships
  today", not necessarily the pre-test version — usually that's what the
  operator wants (the release they just tested becomes stable) but the
  back-version-QA case requires a manual `brew-upgrade-archon` after.
- Phase 1 confirmation banner now mentions that `archon-stable` will be
  preserved so the operator isn't surprised by the reinstall during Phase 5.

No changes to curl-mac/curl-vps paths. No changes to Phase 4 test suite.

* fix(providers/pi): install PI_PACKAGE_DIR shim so Pi workflows run in a compiled binary (#1360)

v0.3.9 made Pi boot-safe: lazy-loading its imports meant `archon version`
no longer crashed on `@mariozechner/pi-coding-agent/dist/config.js`'s
module-init `readFileSync(getPackageJsonPath())`. That's what the
`provider-lazy-load.test.ts` regression test guards.

The fix was only half the problem though. When a Pi workflow actually
runs, sendQuery() triggers the dynamic import — and Pi's config.js
module-init fires then, hitting the exact same ENOENT on
`dirname(process.execPath)/package.json`. Discovered by running
`archon workflow run test-pi` against a locally-compiled 0.3.9 binary:

    [main] Failed: ENOENT: no such file or directory,
           open '/private/tmp/package.json'
        at readFileSync (unknown)
        at <anonymous> (/$bunfs/root/archon-providertest:184:7889)
        at init_config

Boot-safe ≠ runtime-safe. The `/test-release` run for 0.3.9 passed
because it only exercised `archon-assist` (Claude); Pi was never
actually invoked on the released binary.

Fix: before the dynamic `import('@mariozechner/pi-coding-agent')` in
sendQuery, install a PI_PACKAGE_DIR shim. Pi's config.js checks
`process.env.PI_PACKAGE_DIR` first in its `getPackageDir()` and
short-circuits the `dirname(process.execPath)` walk. We write a
minimal `{name, version, piConfig:{}}` stub to
`tmpdir()/archon-pi-shim/package.json` (idempotent — existsSync check)
and set the env var. Pi only reads `piConfig.name`, `piConfig.configDir`,
and `version` from that file, all optional, so the stub surface is
genuinely minimal.
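
A sketch of the shim install, with illustrative stub field values (Pi only reads `piConfig.name`, `piConfig.configDir`, and `version`, all optional):

```typescript
import { existsSync, mkdirSync, writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Write a minimal package.json stub and point PI_PACKAGE_DIR at it so
// Pi's getPackageDir() short-circuits the dirname(process.execPath)
// walk that ENOENTs inside a compiled binary.
function installPiPackageDirShim(): string {
  const shimDir = join(tmpdir(), "archon-pi-shim");
  const pkgPath = join(shimDir, "package.json");
  if (!existsSync(pkgPath)) { // idempotent across sendQuery calls
    mkdirSync(shimDir, { recursive: true });
    writeFileSync(
      pkgPath,
      JSON.stringify({ name: "archon-pi-shim", version: "0.0.0", piConfig: {} }),
    );
  }
  process.env.PI_PACKAGE_DIR = shimDir;
  return shimDir;
}
```

The real call sits immediately before the dynamic `import('@mariozechner/pi-coding-agent')` in `sendQuery`, so the env var is guaranteed to be in place before Pi's `config.js` module-init fires.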

Localized to PiProvider: no global state, no mutation of any shared
config, no upstream fork. Claude and Codex providers are unaffected
(their SDKs don't have this class of module-init side effect).

Verified end-to-end: built a compiled archon binary with this patch,
ran `archon workflow run test-pi --no-worktree` (Pi workflow with
model `anthropic/claude-haiku-4-5`), got a clean response. Before the
patch, same binary crashed at `dag_node_started` with the ENOENT above.

Regression test added: asserts `PI_PACKAGE_DIR` is set after sendQuery
hits even its fast-fail "no model" path. Together with the existing
`provider-lazy-load.test.ts` (boot-safe) this covers both halves.

* feat(providers): autodetect canonical binary install paths for Claude and Codex (#1361)

Both binary resolvers previously stopped at env-var + explicit config and
threw a "not found" error when neither was set. Users who followed the
upstream-recommended install flow (Anthropic's `curl install.sh` for
Claude, `npm install -g @openai/codex`) still had to manually set either
`CLAUDE_BIN_PATH` / `CODEX_BIN_PATH` or the corresponding config field
before any workflow could run.

Add a tier-N autodetect step between the explicit config tier and the
install-instructions throw. Purely additive: env and config still win
when set (precedence covered by new tests). On autodetect miss, the same
install-instructions error fires as before.

Claude probe list (verified against docs.claude.com "Uninstall Claude
Code → Native installation" section):
  - $HOME/.local/bin/claude            (mac/linux native installer)
  - $USERPROFILE\.local\bin\claude.exe (Windows native installer)

Codex probe list (verified against openai/codex README; npm global-
install puts the binary at `{npm_prefix}/bin/<name>` on POSIX,
`{npm_prefix}\<name>.cmd` on Windows):
  - $HOME/.npm-global/bin/codex   (user-set `npm config set prefix`)
  - /opt/homebrew/bin/codex       (mac arm64 with homebrew-node)
  - /usr/local/bin/codex          (mac intel / linux system node)
  - %APPDATA%\npm\codex.cmd       (Windows npm global default)
  - $HOME\.npm-global\codex.cmd   (Windows user-set prefix)

Not probed (explicit override still required):
  - Custom npm prefixes — `npm root -g` would need a subprocess per
    resolve, too much surface for a probe helper
  - `brew install --cask codex` — cask layout isn't a PATH binary
  - Manual GitHub Releases extracts — placement is user-determined
  - `~/.bun/bin/codex` — not documented in openai/codex README
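The autodetect tier itself is simple: probe a fixed candidate list and return the first hit, or nothing so the caller falls through to the existing install-instructions error. A sketch under those assumptions (function names are illustrative; the first Claude candidate path is from the probe list above):

```typescript
import { existsSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

// Return the first candidate path that exists on disk, or null so the
// resolver can fall through to the install-instructions throw.
function autodetectBinary(candidates: string[]): string | null {
  for (const candidate of candidates) {
    if (existsSync(candidate)) return candidate;
  }
  return null;
}

const claudeCandidates = [
  join(homedir(), ".local", "bin", "claude"), // mac/linux native installer
];
```

Because env var and explicit config are checked first, a null result here is the only path that reaches the error, which preserves the precedence the new tests cover.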

Pi provider intentionally has no equivalent change: the Pi SDK is
bundled into the archon binary (no subprocess), so there's no "binary"
to resolve. Pi auth lives at `~/.pi/agent/auth.json` which the SDK
already finds by default, and the PR A shim (`PI_PACKAGE_DIR`) handles
the package-dir case via Pi's own documented escape hatch.

E2E verified: removed both config entries from ~/.archon/config.yaml,
rebuilt compiled binary, ran `archon workflow run archon-assist` and a
Codex workflow. Logs showed `source: 'autodetect'` for both, responses
returned cleanly.

* fix(providers/test): use os.homedir() instead of $HOME in claude binary autodetect test

The native-installer autodetect test computed its expected path from
process.env.HOME, but the implementation uses node:os homedir(). On
Windows, HOME is typically unset (Windows uses USERPROFILE), so the
test fell back to '/Users/test' while the resolver returned the real
home dir — making the spy's path-equality check fail and breaking CI
on windows-latest.

Mirror the implementation by importing homedir() from node:os and
joining with node:path so the expected path matches the actual
platform-resolved home and separator.
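The fix in a nutshell, as a sketch (the probe path shape is from the autodetect commit above; the point is computing it the same way the resolver does):

```typescript
import { homedir } from "node:os";
import { join } from "node:path";

// Mirror the implementation: node:os homedir() + node:path join, instead
// of process.env.HOME (typically unset on Windows, which uses USERPROFILE).
const expectedClaudePath = join(homedir(), ".local", "bin", "claude");
```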

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(server): contain Discord login failure so it doesn't kill the server (#1365)

Reported in #1365: a user running `archon serve` with DISCORD_BOT_TOKEN
set but the "Message Content Intent" toggle disabled in the Discord
Developer Portal saw the entire server crash with `Used disallowed
intents`. Discord rejects the gateway connection (close code 4014) when
a privileged intent is requested without being enabled, and the
unguarded `await discord.start()` propagated the error all the way up,
taking the web UI down with it.

Wrap discord.start() in try/catch — log the failure with an actionable
hint (special-cased for the disallowed-intent error) and continue
running. Other adapters and the web UI come up regardless. The shutdown
handler already uses optional chaining (`discord?.stop()`) so nulling
discord after a failed start is safe.
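The containment pattern can be sketched as below. This is an illustration, not the server source: the adapter shape, logger signature, and function name are assumptions; the "Used disallowed intents" error string is from the report above.

```typescript
type DiscordLike = { start(): Promise<void> };

// Start the Discord adapter but never let a login failure propagate:
// log an actionable hint and return null so shutdown's discord?.stop()
// stays safe and the rest of the server keeps running.
async function startDiscordContained(
  discord: DiscordLike | null,
  log: (msg: string) => void,
): Promise<DiscordLike | null> {
  if (!discord) return null;
  try {
    await discord.start();
    return discord;
  } catch (err) {
    const msg = err instanceof Error ? err.message : String(err);
    if (msg.includes("Used disallowed intents")) {
      log(
        "Discord login failed: enable the Message Content Intent in the " +
          "Discord Developer Portal. Continuing without Discord.",
      );
    } else {
      log(`Discord login failed: ${msg}. Continuing without Discord.`);
    }
    return null;
  }
}
```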

Other adapters (Telegram, Slack, GitHub, Gitea, GitLab) have the same
unguarded-start pattern but are out of scope for this fix — addressing
them is tracked separately.

Also expanded the Discord setup docs with a caution callout that names
the exact error string and the new log event so users can grep for
both.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* docs(script-nodes): dedicated guide + teach the archon skill (#1362)

* docs(script-nodes): add dedicated guide and teach the archon skill how to write them

Script nodes (script:) have been a first-class DAG node type since v0.3.3 but
were documented only as one-liners in CLAUDE.md and a CI smoke test. Claude
Code reading the archon skill would see "Four Node Types: command, prompt,
bash, loop" and reach for bash+node/python one-liners instead of a proper
script node — losing bun's --no-env-file isolation, uv's --with dependency
pins, and the .archon/scripts/ reuse story.

- New packages/docs-web/src/content/docs/guides/script-nodes.md mirroring the
  structure of loop-nodes.md / approval-nodes.md: schema, inline vs named
  dispatch, runtime/deps semantics, scripts directory precedence (repo > home),
  extension-runtime mapping, env isolation, stdout/stderr contract, patterns,
  and the explicit list of ignored AI fields.
- guides/authoring-workflows.md and guides/index.md updated so the new guide is
  discoverable from both the node-types table and the guides landing page.
- reference/variables.md calls out the no-shell-quote difference between
  bash: and script: substitution — a subtle correctness trap when adapting a
  bash pattern into a script node.
- Sidebar order bumped +1 on hooks/mcp-servers/skills/global-workflows/
  remotion-workflow to slot script-nodes at order 5 next to the other
  node-type guides.

- .claude/skills/archon/SKILL.md: replaces stale "Four Node Types" (which
  also silently omitted approval and cancel) with the accurate seven, with a
  script-node code block showing both inline and named patterns.
- references/workflow-dag.md: full Script Node section covering dispatch,
  resolution, deps, stdout contract, and the list of AI-only fields that are
  ignored; validation-rules list updated.
- references/dag-advanced.md and references/variables.md: retry-support line
  corrected; no-shell-quote note added.
- examples/dag-workflow.yaml: added an extract-labels TypeScript script node
  and updated the header comment.

* fix(docs): review follow-ups for script-node guide

- skills example: extract-labels was reading process.env.ISSUE_JSON which is
  never set; use String.raw`$fetch-issue.output` so the upstream bash node's
  JSON is actually consumed
- guides/script-nodes.md + skills/workflow-dag.md: idle_timeout is accepted
  but ignored on script (and bash) nodes — executeScriptNode only reads
  node.timeout. Clarify that script/bash use `timeout`, not idle_timeout
- archon-workflow-builder.yaml: prompt enumerated only bash/prompt/command/loop,
  so the AI builder could never propose script or approval nodes. Add both
  (plus examples + rule about script output not being shell-quoted) and
  regenerate bundled defaults
- book/dag-workflows.md + book/quick-reference.md + adapters/web.md: fill in
  the node-type references that were missing script, approval, and cancel.
  adapters/web.md also overclaimed "loop" in the palette — NodePalette.tsx
  only drags command/prompt/bash, so note that the other kinds are YAML-only

* docs/skill: general hardening — fix inaccuracies, fill workflow/CLI/env gaps, add good-practices + troubleshooting (#1363)

* fix(skill/when): document the full `when:` operator set and compound expressions

The skill reference previously stated "operators: ==, != only" which is
materially wrong — the condition evaluator supports ==, !=, <, >, <=, >=
plus && / || compound expressions with && binding tighter than ||, plus
dot-notation JSON field access. An agent authoring a workflow from the
skill would think half the operators don't exist.

Replaces the single-sentence section with a structured reference covering:
- All six comparison operators (string and numeric modes)
- Compound expressions with precedence rules and short-circuit eval
- JSON dot notation semantics and failure modes
- The fail-closed rules in full (invalid expression, non-numeric side,
  missing field, skipped upstream)

Grounded in packages/workflows/src/condition-evaluator.ts.
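The precedence rule can be illustrated with a toy evaluator (NOT the real condition-evaluator.ts): because `&&` binds tighter than `||`, an expression splits into `||`-separated clauses of `&&`-joined comparisons. Only `==`/`!=` are sketched here; the real evaluator also supports `<`, `>`, `<=`, `>=` and dot-notation field access.

```typescript
// Toy illustration of "&& binds tighter than ||" with short-circuit eval
// and fail-closed handling of unparseable comparisons.
function evalWhen(expr: string, vars: Record<string, string>): boolean {
  const compare = (cmp: string): boolean => {
    const m = cmp.match(/^\s*(\S+)\s*(==|!=)\s*(\S+)\s*$/);
    if (!m) return false; // fail closed on an invalid expression
    const left = vars[m[1]] ?? m[1];
    const right = vars[m[3]] ?? m[3];
    return m[2] === "==" ? left === right : left !== right;
  };
  return expr
    .split("||")
    .some((clause) => clause.split("&&").every(compare));
}
```

`some`/`every` give the short-circuit behavior documented above for free.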

* feat(skill): document Approval and Cancel node types

Approval and cancel nodes are first-class DAG node types (approval since the
workflow lifecycle work in #871, cancel as a guarded-exit primitive) but the
skill never described either one. An agent reading the skill and asked to
"add a review gate before implementation" or "stop the workflow if the input
is unsafe" would fall back to bash + exit 1, losing the proper semantics
(cancelled vs. failed, on_reject AI rework, web UI auto-resume).

Approval node coverage (references/workflow-dag.md, SKILL.md):
- Full configuration block with message, capture_response, on_reject
- The interactive: true workflow-level requirement for web UI delivery
- Approve/reject commands across all platforms (CLI, slash, natural
  language) and the capture_response → $node-id.output flow
- Ignored-fields list + the on_reject.prompt AI sub-node exception

Cancel node coverage (references/workflow-dag.md, SKILL.md):
- Single-field schema (cancel: "<reason>")
- Lifecycle: cancelled (not failed); in-flight parallel nodes stopped;
  no DAG auto-resume path
- The "cancel: vs bash-exit-1" decision rule (expected precondition miss
  vs. check itself failing)
- Two canonical patterns — upstream-classification gate, pre-expensive-step
  gate

Validation-rules list updated to enumerate approval/cancel constraints
(message non-empty, on_reject.max_attempts range 1-10, cancel reason
non-empty), plus a forward note that script: joins the mutually-exclusive
set once PR #1362 lands.

Placement in both files is after the Loop section and before the validation
section, so this commit stays additive with respect to PR #1362's Script
node insertion between Bash and Loop — rebase is clean.

* feat(skill): document workflow-level fields beyond name/provider/model

The skill's Schema section previously showed only name, description, provider,
and model at the workflow level — which is most of a stub. Agents asked to
"use the 1M-context Claude beta" or "run this under a network sandbox" or
"add a fallback model in case Opus rate-limits" had no way to discover
that any of these fields existed at the workflow level.

Adds a comprehensive Workflow-Level Fields section covering:
- Core: name, description, provider, model, interactive (with explicit
  callout that interactive: true is REQUIRED for approval/loop gates on
  web UI — a common footgun)
- Isolation: worktree.enabled for pin-on/pin-off (the only worktree field
  at workflow level; baseBranch/copyFiles/path/initSubmodules are
  config.yaml only, so a cross-reference points there)
- Claude SDK advanced: effort, thinking, fallbackModel, betas, sandbox,
  with explicit per-node-only exceptions (maxBudgetUsd, systemPrompt)
- Codex-specific: modelReasoningEffort (with note that it's NOT the same
  as Claude's effort — this has confused users), webSearchMode,
  additionalDirectories
- A complete worked example combining sandbox + approval + interactive

All fields cross-referenced against packages/workflows/src/schemas/workflow.ts
and packages/workflows/src/schemas/dag-node.ts.

* feat(skill/loop): document interactive loops and gate_message

Interactive loop nodes pause between iterations for human feedback via
/workflow approve — used by archon-piv-loop and archon-interactive-prd.
The skill's Loop Nodes section previously omitted both interactive: true
and gate_message entirely, so an agent writing a guided-refinement
workflow wouldn't know the feature exists or that gate_message is
required at parse time.

Adds:
- interactive and gate_message rows to the config table (marking
  gate_message as required when interactive: true — enforced by the
  loader's superRefine)
- A dedicated "Interactive Loops" subsection explaining the 6-step
  iterate-pause-approve-resume flow
- Explicit call-out that $LOOP_USER_INPUT populates ONLY on the first
  iteration of a resumed session — easy to miss and a common surprise
- Workflow-level interactive: true requirement for web UI delivery
  (loader warning otherwise) so the full-flow example is complete
- Note that until_bash substitution DOES shell-quote $nodeId.output
  (unlike script bodies) — called out since the audit surfaced this
  inconsistency

* fix(skill/cli): complete the CLI command reference with missing lifecycle commands

The CLI reference previously documented only list, run, cleanup, validate,
complete, version, setup, and chat — missing nearly every workflow
lifecycle command an agent needs to operate a paused, failed, or stuck
run. The interactive-workflows reference assumed these commands existed
without actually documenting them.

Adds full documentation for:
- archon workflow status — show running workflow(s)
- archon workflow approve <run-id> [comment] — resume approval gate
  (also populates $LOOP_USER_INPUT on interactive loops and the gate
  node's output when capture_response: true)
- archon workflow reject <run-id> [reason] — reject gate; cancels or
  triggers on_reject rework depending on node config
- archon workflow cancel <run-id> — terminate running/paused with
  in-flight subprocess kill
- archon workflow abandon <run-id> — mark stuck row cancelled without
  subprocess kill (for orphan-cleanup after server crashes — matches
  the #1216 precedent)
- archon workflow resume <run-id> [message] — force-resume specific
  run (auto-resume is default; this is for explicit override)
- archon workflow cleanup [days] — disk hygiene for old terminal runs
  (with explicit callout that it does NOT transition 'running' rows,
  a common confusion)
- archon workflow event emit — used inside loop prompts for state
  signalling; documented so agents don't invent their own mechanism
- archon continue <branch> [flags] [msg] — iterative-session entry
  point with --workflow and --no-context flags

Also:
- Adds --allow-env-keys flag to the `workflow run` flag table with
  audit-log context and the env-leak-gate remediation use case
- Adds an "Auto-resume without --resume" note disambiguating when
  --resume is needed vs. when auto-resume handles it
- Adds --include-closed flag to `isolation cleanup`, which was
  previously missing; converts the flag list to a structured table
- Explains the cancel/abandon distinction (live subprocess vs. orphan)

All grounded in packages/cli/src/commands/workflow.ts, continue.ts,
and isolation.ts.

* feat(skill/repo-init): add scripts/ and state/, three-path env model, per-project env injection

The repo-init reference was missing two first-class .archon/ directories
(scripts/ since v0.3.3, state/ since the workflow-state feature) and had
nothing to say about env — the #1 thing a user hits on first-run when
their repo has a .env file with API keys.

Directory tree updates:
- Adds .archon/scripts/ with the extension->runtime rule (.ts/.js -> bun,
  .py -> uv) so agents know where to put named scripts referenced by
  script: nodes.
- Adds .archon/state/ with explicit "always gitignore" callout — these
  are runtime artifacts, not source. Previously undocumented in the skill.
- Adds .archon/.env (repo-scoped Archon env) and distinguishes it from
  the target repo's top-level .env.
- Adds a "What each directory is for" list so the structure isn't just
  a tree with no narrative.
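The extension→runtime rule above is small enough to state as code. A sketch (function name illustrative; the real dispatch lives in the script-node executor):

```typescript
import { extname } from "node:path";

// .ts/.js scripts dispatch to bun, .py to uv; anything else is a
// validation error surfaced by the caller.
function runtimeForScript(file: string): "bun" | "uv" | null {
  switch (extname(file)) {
    case ".ts":
    case ".js":
      return "bun";
    case ".py":
      return "uv";
    default:
      return null;
  }
}
```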

.gitignore guidance:
- state/ and .env added as must-gitignore (state/ matches CLAUDE.md and
  reference/archon-directories.md — skill was lagging).
- mcp/ demoted to conditional — gitignore only if you hardcode secrets.

New "Three-Path Env Model" section:
- ~/.archon/.env (trusted, user), <cwd>/.archon/.env (trusted, repo),
  <cwd>/.env (UNTRUSTED, target project — stripped from subprocess env).
- Precedence (override: true across archon-owned paths) and the
  observable [archon] loaded N keys / stripped K keys log lines so
  operators can verify what actually happened.
- Decision tree for where to put API keys vs. target-project env vs.
  things Archon shouldn't touch.
- Links to archon setup --scope home|project with --force for writing
  to the right file with timestamped backups.
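The three-path model reduces to a merge-then-strip over plain maps. A hedged sketch (real loading parses .env files and logs the loaded/stripped counts; the function and parameter names here are illustrative):

```typescript
// Archon-owned layers merge with override (repo beats user); the
// untrusted target-repo .env is never merged — its keys are stripped
// from the subprocess env instead.
function buildSubprocessEnv(
  base: Record<string, string>,
  userEnv: Record<string, string>,   // ~/.archon/.env (trusted, user)
  repoEnv: Record<string, string>,   // <cwd>/.archon/.env (trusted, repo)
  targetEnvKeys: string[],           // keys found in <cwd>/.env (UNTRUSTED)
): Record<string, string> {
  const env = { ...base, ...userEnv, ...repoEnv };
  for (const key of targetEnvKeys) delete env[key];
  return env;
}
```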

New "Per-Project Env Injection" section:
- Documents both managed surfaces: .archon/config.yaml env: block
  (git-committed, $REF expansion) and Web UI Settings → Projects →
  Env Vars (DB-stored, never returned over API).
- Names every execution surface that receives the injected vars:
  Claude/Codex/Pi subprocess, bash: nodes, script: nodes, and direct
  codebase-scoped chat.
- Documents the env-leak gate with all 5 remediation paths so an agent
  hitting "Cannot register: env has sensitive keys" knows the options.

Grounded in CHANGELOG v0.3.7 (three-path env + setup flags), v0.3.0
(env-leak gate), and reference/security.md on the docs site.

* fix(skill/authoring-commands): correct override paths and add home-scoped commands

The file-location and discovery sections described an override layout that
does not match the actual resolver. It showed:

  .archon/commands/defaults/archon-assist.md  # Overrides the bundled

and claimed `.archon/commands/defaults/` was where repo-level overrides
lived. In fact the resolver (executor-shared.ts:152-200 +
command-validation.ts) walks `.archon/commands/` 1 level deep and uses
basename matching — putting `archon-assist.md` at the top of `.archon/commands/`
is the canonical way to override the bundled version. The `defaults/`
subfolder is an Archon-internal convention for shipping bundled defaults,
not a user-facing override pattern.

Also, home-scoped commands (`~/.archon/commands/`, shipped in v0.3.7)
were completely absent — agents authoring personal helpers wouldn't
know they could live at the user level and be shared across every repo.

Changes:
- File Location section now shows all three discovery scopes (repo,
  home, bundled) with precedence ordering and 1-level subfolder rules
- Duplicate-basename rule documented as a user error surface
- Discovery and Priority section rewritten with accurate 3-step lookup
  order — no more references to the nonexistent defaults/ override path
- Adds the Web UI "Global (~/.archon/commands/)" palette label note so
  users authoring helpers for the builder know what to expect

No code changes — this is a pure fix of stale/incorrect skill reference
material.
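For illustration only (the skill fix itself touches no code), the corrected 3-step lookup can be sketched over plain path lists. Assumptions: discovery walks at most one subfolder deep and matches on basename, per executor-shared.ts as described above; the function shape here is hypothetical.

```typescript
// Repo .archon/commands/ wins over home ~/.archon/commands/, which wins
// over bundled defaults; match is by basename, at most 1 level deep.
function resolveCommand(
  name: string, // e.g. "archon-assist.md"
  repoFiles: string[],
  homeFiles: string[],
  bundled: string[],
): { scope: "repo" | "home" | "bundled"; path: string } | null {
  const basename = (p: string) => p.split("/").pop()!;
  const oneLevel = (p: string) => p.split("/").length <= 2;
  for (const [scope, files] of [
    ["repo", repoFiles],
    ["home", homeFiles],
    ["bundled", bundled],
  ] as const) {
    const hit = files.find((f) => oneLevel(f) && basename(f) === name);
    if (hit) return { scope, path: hit };
  }
  return null;
}
```

Note how `defaults/archon-assist.md` in the bundled scope loses to a top-level `archon-assist.md` in the repo scope — the override behavior the fixed reference now documents.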

* feat(skill): add workflow good-practices and troubleshooting reference pages

Closes two gaps from the audit. The skill previously had zero guidance on
designing multi-node workflows (what to avoid, what to reach for first,
how to structure artifact chains) and zero guidance on where to look
when things go wrong (log paths, env-leak gate remediations, orphan-row
cleanup, resume semantics).

New references/good-practices.md (9 Good Practices + 7 Anti-Patterns):

- Use deterministic nodes (bash:/script:) for deterministic work, AI for
  reasoning — the single biggest quality lever
- output_format required whenever downstream when: reads a field — the
  most common source of "workflow silently routes wrong"
- trigger_rule: none_failed_min_one_success after conditional branches —
  the classic bug where all_success fails because a skipped when:-gated
  branch doesn't count as a success
- context: fresh requires artifacts for state passing — commands must
  explicitly "read $ARTIFACTS_DIR/..." when downstream of fresh
- Cheap models (haiku) for glue, strong for substance
- Workflow descriptions as routing affordances
- Validate (archon validate workflows) + smoke-run before shipping
- Artifact-chain-first design
- worktree.enabled: true for code-changing workflows (reversibility)
- Anti-patterns with before/after YAML examples for each (AI-for-tests,
  free-form when: matching, context: fresh without artifacts, long flat
  AI-node layers, secrets in YAML, retry on loop nodes, tiny
  max_iterations, missing workflow-level interactive:, tool-restricted
  MCP nodes)

New references/troubleshooting.md:

- Log location (~/.archon/workspaces/<owner>/<repo>/logs/<run-id>.jsonl)
  with jq recipes for common queries (last assistant message, failed
  events, full stream)
- Artifact location for cross-node handoff debugging
- 9 Common Failure Modes, each with root cause + concrete fix:
  - $BASE_BRANCH unresolvable
  - Env-leak gate (5 remediations)
  - Claude/Codex binary not found (compiled-binary-only)
  - "running" forever (AI working / orphan / idle_timeout)
  - Mid-workflow failure and auto-resume semantics
  - Approval gate missing on web UI (workflow-level interactive:)
  - MCP plugin connection noise (filtered by design)
  - Empty $nodeId.output / field access (4 causes)
- Diagnostic command cheat sheet (list, status, isolation list, validate,
  tail-log, --verbose, LOG_LEVEL=debug)
- Escalation protocol (version + validate + log tail + CHANGELOG + issue)

SKILL.md routing table now dispatches "Workflow good practices /
anti-patterns" and "Troubleshoot a failing / stuck workflow" to the new
references so an agent can find them without having to know they exist.

* docs(book): update node-types coverage from four to all seven

The book is the curated first-contact reading path (landing page → "Get
Started" → /book/). Both dag-workflows.md and quick-reference.md were
stuck on "four node types" — missing script, approval, and cancel. A user
reading the book as their first introduction would form an incomplete
mental model, then find three more node types in the reference section
later with no explanation of when they arrived.

book/dag-workflows.md:
- "four node types" → "seven node types. Exactly one mode field is
  required per node"
- Table now lists Command, Prompt, Bash, Script, Loop, Approval, Cancel
  with one-line "when to use" for each, and cross-links to the dedicated
  guide pages for Script / Loop / Approval
- New sections below the table for Script (inline + named examples with
  runtime and deps), Approval (with the interactive: true workflow-level
  note that's easy to miss), and Cancel (guarded-exit pattern) — keeping
  the existing narrative shape for Bash and Loop

book/quick-reference.md:
- Node Options table now includes script, approval, cancel rows
- agents row added (inline sub-agents, Claude-only)
- New "Script-specific fields" and "Approval-specific fields" subsections
  so the cheat-sheet is actually complete rather than pointing users
  elsewhere for the required constraints
- Retry row callout that loop nodes hard-error on retry — previously
  omitted
- bash timeout note widened to cover script timeout (same semantics)

Both files are docs-web content; the CI build on the docs-script-nodes
PR (#1362) previously validated the Starlight build path with a similar
table addition, so this should render clean.

* fix(skill/cli): remove nonexistent `archon workflow cancel`, fix workflow status jq recipe

Two accuracy issues from the PR code-reviewer (comment 4311243858).

C1: `archon workflow cancel <run-id>` does NOT exist as a CLI subcommand.
The switch at packages/cli/src/cli.ts:318-485 dispatches on list / run /
status / resume / abandon / approve / reject / cleanup / event — running
`archon workflow cancel` hits the default case and exits with "Unknown
workflow subcommand: cancel" (cli.ts:478-484). Active cancellation is
only available via:
  - /workflow cancel <run-id> chat slash command (all platforms)
  - Cancel button on the Web UI dashboard
  - POST /api/workflows/runs/{runId}/cancel REST endpoint

cli-commands.md: removed the `### archon workflow cancel <run-id>`
subsection; kept the `abandon` subsection but made it explicit that
abandon does NOT kill a subprocess. Added a call-out box at the bottom
of the abandon section explaining where to go for actual cancellation.

troubleshooting.md "running forever" section: split the original
cancel-vs-abandon advice into three bullets — Web UI / CLI abandon (for
orphans, no subprocess kill) / chat `/workflow cancel` (for live runs
that need interruption). Added an explicit "there is no archon workflow
cancel CLI subcommand" parenthetical since the wrong command was being
suggested in flow.

I1: the `archon workflow list --json` diagnostic used an incorrect jq
filter. workflow list's --json output (workflow.ts:185-219) has shape
{ workflows: [{ name, description, provider?, model?, ... }], errors: [...] }
with no `runs` field — `jq '.workflows[] | select(.runs)'` returns empty
unconditionally. Replaced with `archon workflow status --json | jq '.runs[]'`,
which matches the actual shape of workflowStatusCommand at
workflow.ts:852+ ({ runs: WorkflowRun[] }). Also tightened the narration
to distinguish JSON from human-readable status output.

No change to the commit history in this PR — these are follow-up fixes
to claims I introduced in earlier commits of this branch (f10b989e for
C1, 66d2b86e for I1).

* fix(skill): remove env-leak gate references (feature was removed in provider extraction)

C2 from the PR code-reviewer (comment 4311243858). The pre-spawn env-leak
gate was removed from the codebase during the provider-extraction refactor
— see TODO(#1135) at packages/providers/src/claude/provider.ts:908. Zero
hits for --allow-env-keys / allowEnvKeys / allow_env_keys / allow_target_repo_keys
across packages/. The CLI's parseArgs (cli.ts:182-208) has no
--allow-env-keys option, and because parseArgs uses strict: false, an
unknown --allow-env-keys would be silently ignored rather than error.

What remains accurate and is NOT touched:
- Three-Path Env Model section (user/repo archon-owned envs are loaded;
  target repo <cwd>/.env keys are stripped from process.env at boot)
  still correctly describes current behavior, grounded in
  packages/paths/src/strip-cwd-env.ts + env-integration.test.ts
- Per-Project Env Injection section (Option 1: .archon/config.yaml env:
  block; Option 2: Web UI Settings → Projects → Env Vars) is unchanged —
  both remain the sanctioned way to get env vars into subprocesses

Removed claims (all three files):
- cli-commands.md: --allow-env-keys flag row in the workflow run flags
  table
- repo-init.md: the "Env-leak gate" subsection at the end of Per-Project
  Env Injection listing 5 remediations (all of which reference UI/CLI/
  config surfaces that don't exist). Replaced with a succinct callout
  that explains the actual current behavior — target repo .env keys are
  stripped, workflows that need those values should use managed
  injection — so the reader still gets the "where to put my env vars"
  answer
- troubleshooting.md: the "Cannot register: codebase has sensitive env
  keys" section (error message that can no longer be emitted)

If the env-leak gate is ever resurrected per TODO(#1135), the docs can be
re-added then. The CHANGELOG v0.3.0 entry describing the gate is a
historical record of past behavior and does not need to be rewritten.

* fix(skill/troubleshooting): correct JSONL event type names and field name

C3 from the PR code-reviewer (comment 4311243858). The troubleshooting
reference's event-types table used _started / _completed / _failed
suffixes, but packages/workflows/src/logger.ts:19-30 shows the actual
WorkflowEvent.type enum is:

  workflow_start | workflow_complete | workflow_error |
  assistant | tool | validation |
  node_start | node_complete | node_skipped | node_error

The second jq recipe also queried `.event` but the discriminator is `.type`.

Fixes:
- Event table: renamed columns (_started → _start, _completed → _complete,
  _failed → _error). Explicitly called out the field name as `type` so the
  reader knows what jq selector to use
- Replaced the "tool_use / tool_result" row with a single `tool` row and
  listed its actual payload fields (tool_name, tool_input, duration_ms,
  tokens) — tool_use/tool_result are SDK message kinds that appear within
  the AI stream, not top-level log event types
- Added a `validation` row (was missing; it's emitted by workflow-level
  validation calls with `check` and `result` fields)
- Removed `retry_attempt` row — this event type is not emitted to the
  JSONL file. Retry bookkeeping goes through pino logs, not the workflow
  log file
- Added an explicit callout that loop_iteration_started /
  loop_iteration_completed (and other emitter-only events) go through
  the workflow event emitter + DB workflow_events table, NOT the JSONL
  file. Pointed readers to the DB or Web UI for loop-level detail. This
  distinguishes the two parallel event systems — easy to conflate
  (store.ts:11-17 uses _started/_completed/_failed for the DB side,
  logger.ts uses _start/_complete/_error for JSONL)
- Fixed the "all failed events" jq recipe: .event → .type and _failed → _error
- Minor cleanup: the inline "tool_use events" mention in the "running
  forever" section said the wrong event name — updated to "tool or
  assistant events in the tail"

Grounded in packages/workflows/src/logger.ts (canonical JSONL event
shape) and packages/workflows/src/store.ts (the parallel DB event
naming, which the reviewer correctly flagged as different and worth
keeping distinct).
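The corrected jq recipe translates directly to code: filter JSONL lines on the `.type` discriminator (not `.event`) for the `_error` names from logger.ts. A sketch, assuming only that each line is a JSON object with a `type` field:

```typescript
// Equivalent of: jq 'select(.type == "node_error" or .type == "workflow_error")'
function failedEvents(jsonl: string): Array<{ type: string }> {
  return jsonl
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line) as { type: string })
    .filter((e) => e.type === "node_error" || e.type === "workflow_error");
}
```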

* fix(skill): two stragglers from the code-reviewer audit

Cleanup of two references that slipped through the earlier C1 and C3 fixes:

- references/troubleshooting.md:126: `node_failed` → `node_error`
  (the "Node output is empty" diagnostics section references the JSONL
  log, which uses the logger.ts enum — not the DB workflow_events table
  which does use `node_failed`). The C3 fix corrected the event table
  and one jq recipe but missed this inline mention.

- references/interactive-workflows.md:106: removed `archon workflow
  cancel <run-id>` (nonexistent CLI subcommand) from the
  troubleshooting bullet. This was pre-existing before the hardening
  PR but fell within the C1 remediation scope. Replaced with the
  correct triage: reject (approval gate only) vs abandon (orphan
  cleanup, no subprocess kill) vs chat /workflow cancel (actual
  subprocess termination).

Grounded in the same sources as the earlier C1/C3 commits:
packages/cli/src/cli.ts:318-485 (no cancel case) and
packages/workflows/src/logger.ts:19-30 (JSONL type enum).

* feat(skill): point to archon.diy as the canonical docs source

The skill had no reference to archon.diy (the live docs site built from
packages/docs-web/). Several reference files said "see the docs site"
without naming the URL, leaving the agent to guess or grep the repo for
the hostname. An agent with the skill loaded should know that when the
distilled reference pages don't cover a case, the full canonical docs
are one WebFetch away.

SKILL.md: new "Richer Context: archon.diy" section between Routing and
Running Workflows. Covers:
- When to reach for the live docs (longer examples, tutorial framing,
  features the skill only mentions in passing, "where's that
  documented?" user questions)
- URL map — 13 starting points covering getting-started, book (tutorial
  series), guides/ (authoring + per-node-type + per-node-feature),
  reference/ (variables, CLI, security, architecture, configuration,
  troubleshooting), adapters/, deployment/
- Precedence: skill refs first (context-cheap, tuned for agents), docs
  site as escalation. Prevents agents defaulting to WebFetch when a
  local skill ref already covers the answer

Also upgrades the 5 existing generic "docs site" mentions across
reference files to concrete archon.diy URLs with anchor fragments where
helpful:
- good-practices.md: Inline sub-agents pattern → archon.diy/guides/
  authoring-workflows/#inline-sub-agents
- troubleshooting.md: "Install page on the docs site" → archon.diy/
  getting-started/installation/
- workflow-dag.md: "Workflow Description Best Practices" → anchor link;
  sandbox schema reference → archon.diy/guides/authoring-workflows/
  #claude-sdk-advanced-options
- repo-init.md: Security Model reference → archon.diy/reference/
  security/#target-repo-env-isolation (deep-link into the section that
  covers the <cwd>/.env strip behavior)

URL source of truth: astro.config.mjs:5 (site: 'https://archon.diy').
URL structure mirrors packages/docs-web/src/content/docs/<section>/
<page>.md — verified by the 62 pages the docs build produces.

* chore(workflows): switch default Opus pin to opus[1m] alias (#1395)

Anthropic's Opus 4.7 landed 2026-04-16; on the Anthropic API, opus /
opus[1m] now resolve to 4.7 with a 1M context window at standard
pricing. Using the alias instead of the hard-pinned claude-opus-4-6[1m]
lets bundled default workflows auto-track the recommended Opus version.

No explicit effort is set, so nodes inherit the per-model default
(xhigh on 4.7, high on 4.6).
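
As a sketch (field names per the bundled workflow YAML; the old pin is quoted from this commit's own rationale):

```yaml
# Before: hard pin that goes stale when a new Opus ships
model: claude-opus-4-6[1m]
---
# After: alias auto-tracks the recommended Opus (resolves to 4.7 today, 1M context)
model: opus[1m]
# no explicit effort field: the node inherits the per-model default
```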

* fix(workflow): migrate piv-loop plan handoff to $ARTIFACTS_DIR (#1398)

* fix(workflow): migrate piv-loop plan handoff to $ARTIFACTS_DIR (#1380)

The create-plan node used a relative path (.claude/archon/plans/{slug}.plan.md)
that the AI agent would sometimes write to a different location, breaking all
downstream nodes that glob for the plan file. Migrated all plan/progress file
references to $ARTIFACTS_DIR/plan.md and $ARTIFACTS_DIR/progress.txt, matching
the pattern used by archon-fix-github-issue and other workflows.

Changes:
- Replace slug-based plan path with $ARTIFACTS_DIR/plan.md in create-plan node
- Replace ls -t glob discovery with direct $ARTIFACTS_DIR/plan.md reads in
  refine-plan, code-review, and fix-feedback nodes
- Replace empty-string guard with file-existence check in implement-setup bash
- Migrate progress.txt references in implement loop to $ARTIFACTS_DIR/
- Add explicit plan/progress paths in finalize node
- Regenerated bundled-defaults.generated.ts

Fixes #1380

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(workflow): address review findings in archon-piv-loop

- Rename 'Step 2: Write the Plan' to 'Step 2: Plan File Location' to
  eliminate the duplicate heading that collided with Step 3's identical
  title in the create-plan node
- Guard implement-setup against a 0-task plan file: exit 1 with a
  clear error when no '### Task N:' sections are found, preventing a
  silent no-op implement loop
- Remove 2>/dev/null from code-review commit so pre-commit hook failures
  and other stderr are visible to the agent instead of silently swallowed
- Replace '|| true' on git push in finalize with an explicit WARNING echo
  so push failures (auth, upstream conflict, no remote) surface to the
  agent rather than being silently ignored
- Regenerate bundled-defaults.generated.ts

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* chore(workflows): regenerate bundled defaults to match opus[1m] alias

The bundle was stale relative to the YAML sources after #1395 merged —
check:bundled was failing CI. Regenerated; no YAML edits.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* test(workflows): add anyFailed status derivation coverage for DAG executor (#1403)

PIV Task 1: Adds three new tests in a dedicated describe block
'executeDagWorkflow -- final status derivation' covering the anyFailed
branch (dag-executor.ts ~line 2956) that previously had no direct test:
- one success + one independent failure calls failWorkflowRun (not completeWorkflowRun)
- multiple successes + one failure calls failWorkflowRun (not completeWorkflowRun)
- trigger_rule: none_failed skips dependent node but anyFailed still marks run failed

Fixes #1381.

* docs/skill: add parameter-matrix.md quick-lookup reference

New reference for the archon skill: a single-glance lookup of which
parameter works on which node type, an intent-based "how do I..." table,
a consolidated silent-failure catalog, and an inline agents: section
(previously only referenced via archon.diy).

Purpose is complementary, not duplicative:
- workflow-dag.md remains the authoring guide
- dag-advanced.md remains the hooks/MCP/skills/retry deep-dive
- good-practices.md remains the patterns and anti-patterns
- parameter-matrix.md is the grep-this-first lookup when you know the
  outcome you want but not which field gets you there

Also registers the new reference in SKILL.md routing table.

* docs: point contributors at PR template and Closes #N convention

Add explicit references to .github/PULL_REQUEST_TEMPLATE.md in both
CONTRIBUTING.md and CLAUDE.md, plus a reminder to link issues with
Closes/Fixes/Resolves so they auto-close on merge. Repo-triage runs
were flagging dozens of partially-filled or unlinked PRs each cycle.

* feat(workflows): add maintainer-standup workflow for daily PR/issue triage (#1428)

* feat(workflows): add maintainer-standup workflow for daily PR/issue triage

Daily morning briefing that pulls origin/dev, triages all open PRs and assigned
issues against direction.md, and surfaces progress vs. the previous run. Designed
for live-checkout use (worktree.enabled: false) so it can read its own state.

Layout under .archon/maintainer-standup/:
  - direction.md (committed) — project north-star: what Archon IS / IS NOT.
    Drives PR P4 polite-decline classification with cited clauses.
  - README.md / profile.md.example — setup docs and template for new maintainers.
  - profile.md, state.json, briefs/YYYY-MM-DD.md — gitignored, per-maintainer.

Engine:
  - 3 parallel gather scripts in .archon/scripts/maintainer-standup-*.ts
    (git-status, gh-data, read-context) — bun runtime, JSON stdout.
  - Synthesis node: command file with output_format schema for
    { brief_markdown, next_state }.
  - Persist node: tiny inline bun script writes both to disk.

Run-to-run continuity: state.json carries observed_prs/issues snapshots, so the
next run can detect what merged, what closed, what the maintainer shipped, and
which carry-over items aged past N days.

Also adds .archon/** to the ESLint global ignore list (matches the existing
.claude/skills/** pattern) since .archon/ is user content and not part of any
tsconfig project.

* fix(maintainer-standup): address CodeRabbit review on #1428

- gh-data: bump --limit 100 → 1000 on all_open_prs and warn loudly when
  the cap is hit; preserves the observed_prs invariant the next-run
  "resolved since last run" diff depends on. (CodeRabbit critical)
- maintainer-standup.md: clarify P1 CI signal — the gathered payload only
  carries mergeStateStatus, not statusCheckRollup; for borderline P1s,
  drill in via `gh pr checks <n>`. (CodeRabbit minor)
- workflow.yaml persist: write briefs under local YYYY-MM-DD (sv-SE
  locale) instead of UTC ISO date, so an evening run doesn't file
  tomorrow's brief and break recent_briefs lookups. (CodeRabbit minor)
- workflow.yaml persist: wrap state/brief writes in try/catch; on
  failure dump brief_markdown and next_state to stderr so a 5-minute
  Sonnet synthesis isn't lost to a transient disk error. (CodeRabbit minor)
- gh-data + git-status: switch from execSync (shell-string) to
  execFileSync (argv array) for git/gh invocations. Defense-in-depth
  against shell metacharacters in values that pass through (esp. the
  gh_handle from profile.md). (CodeRabbit nitpick)

* feat(workflows): support explicit tags in workflow YAML (#1190)

Add optional `tags: string[]` to `workflowBaseSchema`. Explicit values take precedence over keyword inference; `tags: []` suppresses inference end-to-end; omitting the field falls back to inference (backwards compatible). Non-array values warn-and-ignore matching the sibling `worktree`/`additionalDirectories` patterns.
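
The three behaviors, sketched as YAML fragments (tag values are illustrative):

```yaml
# Explicit values take precedence over keyword inference
tags: [maintainer, triage]
---
# Empty list suppresses inference end-to-end
tags: []
---
# Omitting the field entirely falls back to inference (backwards compatible)
```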

* feat(workflows): add maintainer-review-pr and group maintainer workflows under maintainer/ (#1430)

* feat(workflows): add maintainer-review-pr and group maintainer workflows under .archon/workflows/maintainer/

Adds the maintainer-review-pr workflow — a Pi/Minimax-based PR triage
flow that gates on direction alignment, scope focus, and PR-template
quality before doing any deep review. If the gate clears, runs the
five review aspects (code/error-handling/test-coverage/comment-quality/
docs-impact) as parallel Archon nodes and auto-posts a synthesized
review comment. If the gate fails (direction conflict, multiple
concerns, sprawling scope), drafts a polite-decline comment and pauses
for the maintainer's approval before posting.

Reorganizes the existing maintainer-standup workflow into the same
subfolder so all maintainer-facing workflows live together. Subfolder
grouping is supported by the workflow loader (1 level deep, resolution
by filename).

What lands:

- .archon/workflows/maintainer/maintainer-standup.yaml (moved from
  .archon/workflows/maintainer-standup.yaml)
- .archon/workflows/maintainer/maintainer-review-pr.yaml (new)
- .archon/commands/maintainer-review-{gate,code-review,error-handling,
  test-coverage,comment-quality,docs-impact,synthesize,report}.md (new,
  Pi-tuned variants of the existing review-agent commands so they avoid
  Claude-only Task / sub-agent patterns)

Pi/Minimax integration:

- Uses provider: pi, model: minimax/MiniMax-M2.7 — verified via the
  e2e-minimax-smoke test that Pi correctly routes to Minimax (session
  jsonl confirms provider=minimax) and that Pi's best-effort
  output_format parser handles the gate's nested schema.
- Two test runs landed real comments: a direction-decline on PR #1335
  and a deep-review on PR #1369. Both were posted to GitHub via the
  workflow's gh pr comment node.

* chore(workflows): also group repo-triage under .archon/workflows/maintainer/

repo-triage is the third maintainer-facing workflow alongside maintainer-standup and maintainer-review-pr; group it in the same subfolder for consistency. Subfolder resolution is by filename so the workflow name is unchanged.

* feat(pi): use ModelRegistry to support custom models and skip auth for unmapped providers (#1284)

Closes #1096.

- Switch Pi provider model lookup from pi-ai's getModel() (static catalog
  only) to ModelRegistry.create(authStorage).find() so user-configured
  custom models in ~/.pi/agent/models.json (LM Studio, ollama, llamacpp,
  custom OpenAI-compatible endpoints) are discoverable.
- Remove the local lookupPiModel helper.
- For env-var-mapped providers (anthropic, openai, etc.) still throw
  with a pi /login hint when credentials are missing. For unmapped
  providers, log pi.auth_missing at info and continue so local models
  that don't need credentials work without ceremony.
- Surface modelRegistry.getError() in the not-found message and emit
  pi.model_not_found so users debugging custom-provider configs see the
  real cause (e.g. missing baseUrl in models.json).
- Guard AuthStorage.create() and ModelRegistry.create() with try/catch
  so a malformed ~/.pi/agent/auth.json surfaces with Pi-framed context
  instead of a raw SDK stack trace.
- Document the credential-free path for local providers in ai-assistants.md.

Co-authored-by: Matt Chapman <Matt@NinjitsuWeb.com>

* chore(workflows): group smoke-test workflows under test-workflows/ + add e2e-minimax-smoke (#1431)

* chore(workflows): group all smoke-test workflows under .archon/workflows/test-workflows/

Move the 7 existing e2e-*.yaml smoke tests plus the new e2e-minimax-smoke
test into a dedicated subfolder. Subfolder grouping is supported by the
workflow loader (1 level deep, resolution by filename) so workflow names
are unchanged. Mirrors the .archon/workflows/maintainer/ split landing
in #1430.

Also adds e2e-minimax-smoke.yaml — a sanity check that Pi correctly
routes to Minimax M2.7 via the user's local pi auth, and that Pi's
best-effort output_format parser handles a small nested schema. Asserts
routing by reading the most recent Pi session jsonl rather than asking
the model to self-identify (LLMs are unreliable narrators about their
own identity, especially when Pi's system prompt mentions other
providers as defaults).

* fix(e2e-minimax-smoke): address CodeRabbit review on #1431

- Widen find window from -mmin -3 to -mmin -10. The smoke's three Pi
  nodes plus the assert can collectively run several minutes on slow
  networks; 3 minutes was tight enough to false-FAIL on a healthy run.
  (CodeRabbit minor)
- Drop non-deterministic `head -1` over `find` output. find doesn't
  guarantee any order; on a tie, the wrong file would be picked. Now
  iterates all matching sessions and breaks on first one carrying the
  routing signal — any match is sufficient evidence. (CodeRabbit minor)
- Replace single-regex `'"provider":"minimax".*"modelId":"MiniMax-M2.7"'`
  with two separate greps joined by `&&`. JSON field order isn't part of
  Pi's contract; a future Pi release reordering `provider` and `modelId`
  in the model_change event would silently false-FAIL the original
  pattern. The new check is order-independent. (CodeRabbit major)
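
The order-independence fix is easy to demonstrate against a session line with reordered keys (a self-contained sketch; the JSON line is fabricated for illustration):

```shell
# Simulate a session line where a future Pi release reordered the fields:
f=$(mktemp)
printf '%s\n' '{"modelId":"MiniMax-M2.7","provider":"minimax"}' > "$f"

# Old single-regex check: order-dependent, silently false-FAILs here.
grep -q '"provider":"minimax".*"modelId":"MiniMax-M2.7"' "$f" || echo "old check: FAIL"

# New check: two greps joined by &&, passes regardless of key order.
grep -q '"provider":"minimax"' "$f" && grep -q '"modelId":"MiniMax-M2.7"' "$f" && echo "new check: PASS"
```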

* fix(maintainer-review): address CodeRabbit findings on #1430 (#1432)

Six findings, two majors and four minors/nitpicks:

- gate.md L17 vs L77: resolved conflicting input-source instructions.
  Body claimed "all inline, no extra fetch" while a later phase
  permitted reading PULL_REQUEST_TEMPLATE.md. Now: explicit "one
  allowed extra read" callout in Phase 1 + matching wording in Gate C.
  (CodeRabbit major)

- gate.md fenced blocks: added missing language identifiers (text/json/
  markdown) to satisfy markdownlint MD040. (CodeRabbit minor)

- gate.md L155 + read-context.ts: deterministic clock. The 3-day deadline
  was anchored to prior_state.last_run_at, which can be stale and produce
  past-dated deadlines. Moved both today and deadline_3d into the
  read-context.ts output (computed via sv-SE locale → ISO date in local
  time) and instructed the gate to use $read-context.output.deadline_3d
  directly. LLMs are unreliable at calendar arithmetic; this avoids it
  entirely. (CodeRabbit major)

- maintainer-review-pr.yaml fetch-diff: dropped 2>/dev/null on gh pr diff
  so auth / network / deleted-PR failures fail the node instead of
  feeding an empty diff to the gate. Empty-but-successful diff (PR has
  no changes) is now an explicit marker the gate can detect. (CodeRabbit
  minor)

- maintainer-review-pr.yaml approve-unclear: added capture_response: true
  so the maintainer's approve comment flows to the report node. Reject
  reasoning is already captured by Archon's run record. (CodeRabbit
  minor)

- maintainer-review-pr.yaml post-decline + report.md: the gh pr edit
  --add-label call previously swallowed all errors with || true and the
  report still claimed the label was applied. Now writes applied/skipped
  to $ARTIFACTS_DIR/.label-applied + the gh stderr to .label-error so
  the report can describe the actual outcome. (CodeRabbit nitpick)

* fix(workflows): approval gate bypass after reject-with-redraft on resume (#1435)

* fix(workflows): approval gate bypass after reject-with-redraft on resume

When an approval node was rejected with on_reject.prompt, the synthetic
PromptNode built to run the on_reject prompt reused the approval gate's
own node ID. executeNodeInternal then wrote a node_completed event with
that ID, causing getCompletedDagNodeOutputs to treat the gate as already
completed on the next resume — bypassing the human gate entirely.

Fix: give the synthetic node the ID `${node.id}:on_reject` so its
node_completed event has a distinct step_name that won't match the
approval gate slot in priorCompletedNodes.

Adds a regression test asserting no node_completed event with the
approval gate's ID is written during on_reject execution.

Fixes #1429

* test(workflows): add positive assertion and SSE side-effect comment for on_reject synthetic node

Add complementary positive assertion to the regression test to verify that
node_completed is written exactly once with step_name 'review:on_reject',
ensuring future refactors that suppress the event entirely would be caught.

Add inline comment in executeApprovalNode documenting the known SSE side-effect:
node_started/node_completed events with nodeId='review:on_reject' flow through
the SSE pipeline into the web UI, resulting in a transient phantom node in the
execution view. This is cosmetic-only — the human gate contract is preserved.

* simplify: reduce duplicate cast pattern in on_reject test assertions

* feat(workflows): add mutates_checkout to allow concurrent runs on live checkout (#1438)

* feat(workflows): add mutates_checkout field to skip path-lock for concurrent runs

Add `mutates_checkout: boolean` (optional, default true) to the workflow
schema. When set to false, the executor skips the path-exclusive lock
that serializes all runs on the same working path, allowing N concurrent
runs on the same live checkout.

The primary use case is `maintainer-review-pr`, which reads shared state
but writes only to per-run artifact paths and GitHub PR comments — two
parallel reviews of different PRs should not fail with "Workflow already
active on this path".

Changes:
- `schemas/workflow.ts`: add optional `mutates_checkout` field
- `loader.ts`: parse and propagate the field (warn-and-ignore on invalid values)
- `executor.ts`: wrap path-lock guard in `if (workflow.mutates_checkout !== false)`
- `executor.test.ts`: two new tests in the concurrent-run guard suite
- `maintainer-review-pr.yaml`: opt in with `mutates_checkout: false`
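
The opt-in, sketched as a top-level workflow field:

```yaml
# Defaults to true (path lock held) when omitted.
mutates_checkout: false   # reads shared state, writes only per-run artifacts
```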

* test(workflows): add loader tests for mutates_checkout parsing

- Add 5 tests covering false, true, omitted, and invalid (string "yes") values
- Invalid non-boolean values are silently dropped with warn — now explicitly tested
- Remove the // end mutates_checkout guard trailing comment (no precedent in file)
- Clarify loader comment: "parse/warn pattern" not "warn-and-ignore pattern" to avoid implying the return style matches interactive

* simplify: collapse nodeType/aiFields pair into single nonAiNode object in parseDagNode

* docs: replace String.raw with direct assignment in script node examples (#1434)

* docs: replace String.raw with direct assignment in script node examples

String.raw`$nodeId.output` breaks whenever the substituted output contains
a backtick: the backtick terminates the template literal early, producing
cryptic parse errors far from the real cause. JSON is valid JS expression
syntax, so direct assignment is safe for
all valid JSON values including those with backticks.

- Replace String.raw pattern in dag-workflow.yaml example
- Replace String.raw pattern in archon-workflow-builder.yaml template
- Add CAUTION bullet in workflow-dag.md Script Node section
- Add Silent Failures item #14 in parameter-matrix.md
- Add Starlight caution aside in script-nodes.md
- Extend script bodies bullet in variables.md
- Regenerate bundled-defaults.generated.ts

Fixes #1427

* docs: fix Rule 6 in generate-yaml prompt to distinguish bun vs uv patterns

Rule 6 still referenced JSON.parse after the example was updated to direct
assignment, creating a contradiction for the AI code generator. Update the
prose to explicitly distinguish TypeScript/bun (direct assignment) from
Python/uv (json.loads), matching the updated embedded example.

* chore(workflows): group experimental workflows under .archon/workflows/experimental/

Move two repo-scoped workflows that were sitting untracked at the workflow
root into a dedicated subfolder. Subfolder grouping is supported by the
loader (1 level deep, resolution by filename), so workflow names are
unchanged and the /release skill still resolves archon-release correctly.

Files moved:
- archon-fix-github-issue-experimental.yaml — Path-A variant of the
  issue-fix workflow used today to land #1434, #1435, #1438.
- archon-release.yaml — the live release workflow used by the /release
  skill end-to-end (validate -> binary smoke -> version bump -> changelog
  -> approval -> commit -> PR -> tag -> Homebrew formula update).

* fix(workflows): export ARTIFACTS_DIR, LOG_DIR, BASE_BRANCH to bash nodes (#1387)

executeBashNode previously only merged explicit envVars on top of
process.env. The three well-known workflow directories (artifactsDir,
logDir, baseBranch) were passed as function parameters and used for
compile-time substitution of $ARTIFACTS_DIR / $LOG_DIR / $BASE_BRANCH
in the script body, but were never added to the subprocess environment.

As a result, any script that relied on shell-runtime expansion — e.g.
JSON_FILE="${ARTIFACTS_DIR}/foo.output.json" inside a heredoc, an
inherited helper script, or a `bash -c` subshell — saw the variable
unset and silently fell back to its default (typically an empty string
or "."), writing artifacts to the workflow cwd instead of the nominal
artifacts directory.

Always build subprocessEnv from process.env plus the three well-known
directories, then allow explicit envVars to override. Compile-time
substitution behavior is unchanged; existing scripts that do not
reference these variables are unaffected; user-supplied envVars still
win on conflict.

* fix(workflow): substitute $nodeId.output refs in approval messages (#1426)

* fix(workflow): substitute \$nodeId.output refs in approval messages

Approval node messages were emitted as raw strings, bypassing the
substituteNodeOutputRefs() pass that prompt/bash/loop/cancel nodes
all run. This made interactive workflows like atlas-onboard show
literal "\$gather-context.output.repo_name" placeholders to humans
at HITL gates, leaving them unable to know what they were approving.

Fix: rendered the approval.message through substituteNodeOutputRefs
once at the top of the standard approval gate path, then used the
resolved string in all 4 emission sites (safeSendMessage,
createWorkflowEvent, pauseWorkflowRun, event-emitter).

Test: new dag-executor.test case wires a structured-output upstream
node into an approval node and asserts pauseWorkflowRun receives the
substituted message ("Repo: hcr-els | App: CCELS | Port: 3012")
rather than the literal placeholders.

Repro: any workflow with an approval node whose message references
\$nodeId.output[.field]. Observed in the wild on atlas-onboard's
confirm-context HITL gate.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(workflow): extend approval-substitution test to cover all 4 emission sites

Per CodeRabbit review: the original test only verified pauseWorkflowRun
received the substituted message, but the fix touches 4 emission sites.
A future regression at safeSendMessage / createWorkflowEvent / event-emitter
would silently leave the test passing while users still saw raw $node.output
placeholders.

Adds two additional assertions:
- platform.sendMessage prompt contains substituted message + does NOT
  contain literal $gather-context.output placeholders
- The persisted approval_requested workflow event's data.message is
  substituted

Event-emitter assertion deferred (no existing pattern for spying on the
global emitter in this test file). Covering two of the three secondary
surfaces closes the practical regression risk: both are user-visible
(chat prompt + audit-log event); the emitter is internal-only.

Test count: 7 pass / 22 expect() (was 18). Full suite 193 pass / 353
expect() — no regressions.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(workflows): expose $LOOP_PREV_OUTPUT in loop node prompts (#1286) (#1367)

* feat(workflows): expose $LOOP_PREV_OUTPUT in loop node prompts (#1286)

Adds a new substitution variable that carries the previous loop iteration's
cleaned output into the next iteration's prompt. Empty on iteration 1; the
prior iteration's output (after stripCompletionTags) on iteration 2+.

Why: fresh_context: true loops have no way to reference what the previous
pass produced or why it failed without dragging the full session forward.
$LOOP_PREV_OUTPUT closes that gap with zero session-cost — same trust
boundary as $nodeId.output, no new external surface.

Changes:
- packages/workflows/src/executor-shared.ts: substituteWorkflowVariables
  accepts a 10th positional loopPrevOutput arg and substitutes
  $LOOP_PREV_OUTPUT (defaults to '').
- packages/workflows/src/dag-executor.ts: executeLoopNode passes
  lastIterationOutput on iteration 2+ (and explicit '' on iteration 1 /
  the first iteration of an interactive resume, since lastIterationOutput
  is a per-call variable that does not survive resume metadata).
- Unit tests: 3 new cases in executor-shared.test.ts.
- Integration tests: 2 new cases in dag-executor.test.ts verifying the
  prompt sent to the AI on iter 1 vs iter 2, and that the value reflects
  cleaned output (no <promise> tags).
- Docs: variables.md, loop-nodes.md (new "Retry-on-failure" pattern),
  CLAUDE.md variable reference.

Backward compatibility: prompts that don't reference $LOOP_PREV_OUTPUT are
unaffected. All 843 workflow tests + type-check + lint + format:check +
bun run validate pass locally.
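
The retry-on-failure shape this enables, sketched in workflow YAML (node fields abbreviated; exact loop-node schema per loop-nodes.md):

```yaml
- id: fix-tests
  type: loop
  fresh_context: true      # each iteration starts with a clean session
  max_iterations: 3
  prompt: |
    Run the test suite and fix any failures.
    Output of the previous attempt (empty on iteration 1):
    $LOOP_PREV_OUTPUT
```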

* docs: address coderabbit review on variables/loop-nodes

- variables.md: include $LOOP_PREV_OUTPUT in substitution-order list and
  availability table to match the new variable row at line 30
- loop-nodes.md: document the interactive-resume exception where the first
  iteration after an approval-gate resume still receives an empty
  $LOOP_PREV_OUTPUT regardless of iteration number (per dag-executor.ts
  L1781-1783 where i === startIteration always clears prev output)

* docs(changelog): add Unreleased entry for $LOOP_PREV_OUTPUT (#1367 review)

* test(loop): add resume-from-approval integration test for $LOOP_PREV_OUTPUT (#1367 review)

Per maintainer-review-pr suggestion (Wirasm): two-call integration test
covering the resume-from-approval scenario.

  - Call 1: fresh interactive loop pauses at the gate after iteration 1 and
    asserts $LOOP_PREV_OUTPUT substitutes to empty on iter 1 (no prior
    output) plus the gate pause is recorded.
  - Call 2: resumed run with metadata.approval populated. The first
    resumed iteration must substitute $LOOP_PREV_OUTPUT to '', NOT to the
    paused run's iter-1 output (which lived in a different process and is
    not persisted). $LOOP_USER_INPUT still flows through as normal.

Locks the documented invariant at dag-executor.ts:1769-1772.

---------

Co-authored-by: voidborne-d <DottyEstradalco@allergist.com>

* feat(maintainer-standup): surface contributor replies since last run (#1457)

The brief was missing a key signal — when contributors reply on PRs or
issues, the maintainer wouldn't see it explicitly. Empirically reviewed
PR replies were buried under aggregate updatedAt timestamps with no
indication of WHO replied or WHAT they said.

This adds a new "Replies waiting on you" section to the daily brief,
sourced from two paginated GitHub API calls scoped by since=last_run_at:

  - /repos/{o}/{r}/issues/comments  PR + issue conversation comments
  - /repos/{o}/{r}/pulls/comments   inline code-review comments

Filters applied:
  - Skip the maintainer's own comments (gh_handle from profile.md)
  - Skip GitHub bot accounts (login ending in [bot]) — coderabbitai,
    chatgpt-codex-connector, dependabot, etc. They post a constant
    churn of automated review tooling that drowns out human replies;
    the maintainer wants the latter.

Output is grouped by PR/issue number with kind classification:
  - issue              comment on a non-PR issue
  - pr_conversation    PR conversation-level comment
  - pr_review          inline code-review comment (most actionable —
                       usually needs a code-level response, so kind
                       upgrades to pr_review whenever review comments
                       arrive on a PR that also has conversation ones)

Sorted by recency (newest reply first). Synthesizer reads
gh-data.output.replies_since_last_run and renders a section.

Verified on a backdated state.json (last_run_at = yesterday morning):
22 human replies on 22 PRs/issues, bot noise filtered (32 → 22 after
the [bot] filter). Surfaces…

After v0.3.10's PR coleam00#1488 squash-merged into main, dev was reset to main's HEAD via git reset --hard. That rewrote dev's history, severing every open PR's merge-base from its original commit and ballooning their diffs (e.g. coleam00#1444 went from +80/-1 to +6626/-300).

This merge commit re-attaches the original release commits (db1c005, f51600a) to dev's history while keeping main's content via a 3-way merge. Open PRs' merge-bases revert to their original commits, their diffs shrink back to normal. No force-push: this is a fast-forward over current origin/dev (fd6d75e is one of this commit's parents).
* …sh-merge (coleam00#1490)

The previous `--ff-only` strategy failed because main's squash commit has a
different SHA than dev's release commit, so dev was never fast-forwardable.

A reset-based "rewrite dev's history to match main" approach was tried and
rejected: it blows up every open PR targeting dev. Their merge-base shifts
to a much older commit, and their diffs balloon with all the release
content. Confirmed today: PRs that were +80/-1 became +6626/-300.

Use a regular merge instead (matches `/release` SKILL Step 9):

    git checkout dev
    git pull origin main --no-edit
    git push origin dev

This creates a merge commit on dev that ties main's squash into dev's
history. Open PRs' merge-bases stay at their original commits, their diffs
stay small, no history is rewritten. The cost is a merge bubble in dev's
log, which is the right trade-off.

Same pattern applied to commit-formula's dev-sync section.

`commit-formula` also gets a `git stash push` before its `git checkout
main` — the previous step (`fetch-and-update-formula`) leaves the formula
file dirty, which blocks `git checkout` until stashed.

KNOWN ISSUE not fixed by this PR: the workflow's formula-update steps
(`fetch-and-update-formula` + `commit-formula`) duplicate CI's
`update-homebrew` job in `.github/workflows/release.yml`. They race each
other on the dev push. See coleam00#1489 for the full analysis and architectural
fix (drop the duplicated steps from this workflow). This PR addresses the
mechanical bugs that prevented the workflow from completing at all.

The recovery merge `398afe05` (Merge main into dev) resolved the
homebrew/archon.rb conflict by running `git checkout main -- homebrew/archon.rb`,
which uses the LOCAL `main` branch — at the time, local main was stale
(still at the pre-formula-update v0.3.6 state) because origin/main had
moved forward via `git push origin dev:main` without local main being
fast-forwarded.

Result: dev's homebrew/archon.rb regressed to v0.3.6 with v0.3.6 SHAs,
even though origin/main, the homebrew-archon tap repo, and the published
brew install all correctly point at v0.3.10.

User impact: zero — `brew install coleam00/archon/archon` reads from the
tap repo (synced correctly during release recovery), not from this
template. But the dev template should match reality.

Fix: pull origin/main's version of the file (which has the correct
v0.3.10 formula) onto dev. Single-file change.

* … site (coleam00#1506)

* fix(simplify): stage only edited files, forbid scratch artifacts

The simplify command used `git add -A`, which sweeps untracked review/
report files (e.g. `review/scope.md` left by upstream review nodes) into
the simplification commit. Replace it with explicit per-file staging
using the list of paths edited in Phase 2, plus a forbidden-paths list
so review artifacts, PR-body scratch files, and anything under
`$ARTIFACTS_DIR` cannot leak into the commit.

* fix(fix-github-issue): forbid scratch artifacts in create-pr step

The inline create-pr prompt told the agent to "stage and commit" any
uncommitted changes, which lets transient artifacts from upstream nodes
(`.pr-body.md`, `review/scope.md`, scratch reports) land in the
implementation commit and PR diff. Replace the loose instruction with
explicit per-file staging, a forbidden-paths list, and a rule that any
PR body file written for `--body-file` must live at
`$ARTIFACTS_DIR/pr-body.md` or `/tmp/` — never inside the worktree.
Applied to both the default and experimental variants.

* fix(workflows): purge remaining git add -A in worktree-context steps

Same class of bug as the simplify and create-pr fixes: every
worktree-facing default that used `git add -A` could sweep transient
review/scratch artifacts (`.pr-body.md`, `review/scope.md`,
`*-report.md`, anything left under `$ARTIFACTS_DIR`) into the commit.
Replace with explicit per-file staging plus a forbidden-paths list and
a `git status --porcelain` verification step.

Touched:
- commands: archon-create-pr, archon-finalize-pr, archon-fix-issue,
  archon-implement-issue, archon-implement-review-fixes
- workflows: archon-piv-loop (3 sites), archon-ralph-dag,
  archon-refactor-safely

Intentionally left as `git add -A`:
- archon-release.yaml: working tree validated clean before this step;
  comment already explains why.
- archon-adversarial-dev.yaml: operates inside `$ARTIFACTS_DIR/app/`,
  a dedicated scratch repo, not the user's worktree.
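The explicit-staging pattern described across these three fixes can be sketched as follows. Variable names and file lists here are illustrative, not the workflow's actual configuration.

```shell
# Known scratch artifacts that must never reach a commit (illustrative list).
FORBIDDEN=".pr-body.md review/scope.md simplify-report.md"

# Return 0 if the path may be staged, 1 if it is a forbidden scratch artifact.
stage_check() {
  case " $FORBIDDEN " in
    *" $1 "*) return 1 ;;
    *)        return 0 ;;
  esac
}

# Stage only the files edited in the implementation phase, one by one,
# instead of sweeping everything with `git add -A`.
for f in src/app.ts review/scope.md docs/guide.md; do
  if stage_check "$f"; then
    echo "staging: $f"                 # real workflow runs: git add -- "$f"
  else
    echo "refusing scratch artifact: $f" >&2
  fi
done
# The real workflow then runs `git status --porcelain` and verifies that
# only the intended paths are staged before committing.
```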
…main sync (coleam00#1492)

The SKILL's Step 9 already uses the correct primitive (`git pull origin main`,
which creates a regular merge commit), but doesn't explicitly warn against the
two trap doors that bit us during v0.3.10:

1. `git pull origin main --ff-only` — used by the experimental archon-release
   workflow's sync-dev-with-main step. Fast-forward is impossible across a
   squash merge; the workflow aborted on every release run.

2. `git reset --hard origin/main` — used during today's recovery as a "clean
   up the merge commit" workaround for #1. It worked locally but rewriting
   dev's history severed every open PR's merge-base, ballooning their diffs
   from <100 lines to thousands. Confirmed: PR coleam00#1444 went from +80/-1 to
   +6626/-300, restored after a recovery merge that re-attached the original
   release commits to dev.

Add explicit DO-NOT block under the existing "Important" callout. Also document
the conflict-resolution gotcha: when the merge conflicts on homebrew/archon.rb,
use `git checkout origin/main -- ...` (NOT local `main`, which is typically
stale because the release pushes via `git push origin dev:main` without
fast-forwarding local main).

The workflow-level fixes for both traps landed in coleam00#1490. This is the
documentation companion so a future maintainer (or AI agent) doesn't repeat
either trap.
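Trap 1 can be demonstrated in a self-contained scratch repo (file names hypothetical): after a squash merge, main's tip shares content but not ancestry with dev, so `--ff-only` aborts while a regular merge succeeds.

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git symbolic-ref HEAD refs/heads/main
git config user.email demo@example.com
git config user.name demo
echo base > f.txt; git add f.txt; git commit -qm "base"

git checkout -qb dev
echo change > f.txt; git commit -qam "dev work"

git checkout -q main
git merge -q --squash dev            # simulate `gh pr merge --squash`
git commit -qm "squash of dev"       # brand-new commit, not a descendant of dev

git checkout -q dev
if git merge --ff-only main >/dev/null 2>&1; then
  echo "unexpected: fast-forward succeeded"
else
  echo "--ff-only aborts: squash commit is not a descendant of dev"
fi
git merge -q --no-edit main          # the correct primitive: a regular merge commit
echo "regular merge commit created: $(git rev-parse --short HEAD)"
```

The regular merge resolves cleanly here because both sides carry identical content; only the history shapes differ.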
* feat(cli): add `archon skill install` command

Adds a standalone `archon skill install [path]` subcommand that copies
the bundled Archon skill files into `<target>/.claude/skills/archon/`,
so users can install or refresh the skill outside the interactive setup
wizard. Defaults to the current directory.

Refactors `copyArchonSkill` out of `commands/setup.ts` into a new
`commands/skill.ts` so the helper can be shared between the wizard and
the new CLI command without pulling in `@clack/prompts`.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
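A minimal sketch of what the subcommand does, under stated assumptions: the bundled-skill directory, target layout, and file names below are hypothetical stand-ins, not the CLI's actual implementation.

```shell
workdir=$(mktemp -d)                 # scratch directory standing in for a project
cd "$workdir"

bundled=$(mktemp -d)                 # stand-in for the bundled Archon skill files
echo "# Archon skill" > "$bundled/SKILL.md"

# `archon skill install [path]` defaults the target to the current directory
# and copies the skill into <target>/.claude/skills/archon/.
target="${1:-.}"
dest="$target/.claude/skills/archon"
mkdir -p "$dest"
cp -R "$bundled/." "$dest/"
echo "Installed Archon skill into $dest"
```

Invocation would then look like `archon skill install` for the current directory, or `archon skill install ~/some/project` to target another checkout.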

* docs: add `skill install` to CLAUDE.md, CLI reference, and skills guide

- Add `skill install` command entries to CLAUDE.md CLI section
- Add `skill install` section to docs-web CLI reference page
- Add bundled Archon skill to Popular Skills table in skills guide

Addresses HIGH findings from comprehensive PR review.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(cli): guard bundled-skill import inside skillInstallCommand try block

The dynamic `await import('../bundled-skill')` was outside the try/catch,
so a load failure crashed uncaught instead of returning exit code 1.
Move the import (and the success log + return) inside the try so import,
copy, and post-copy errors all flow through the same controlled path.

Addresses coderabbitai review on PR coleam00#1445.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Co-authored-by: Thomas Ritter <thomas.ritter@crownpeak.com>
…oleam00#1521)

* fix(docker): resolve Claude binary to glibc variant on Debian image

Bun's hoisted linker installs both glibc and musl optional-dep packages
for the detected CPU arch. The SDK's resolver picks musl first, which
fails to execute on the Debian (glibc) base image — the musl dynamic
loader is absent, causing every Claude call to fail.

Remove the stale ENV CLAUDE_BIN_PATH pointing to the SDK 0.1.x cli.js
path (no longer present in SDK 0.2.x), and add runtime arch detection
in docker-entrypoint.sh. The entrypoint maps uname -m output to the
correct glibc package suffix (x86_64→linux-x64, aarch64→linux-arm64)
and exports CLAUDE_BIN_PATH before exec-ing the server. Users can still
override via CLAUDE_BIN_PATH in their .env or docker run -e.

Closes coleam00#1519

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
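The arch mapping can be sketched as below. The function name and the node_modules path are illustrative assumptions; only the `uname -m` to package-suffix mapping and the `CLAUDE_BIN_PATH` export come from the fix itself.

```shell
# Map `uname -m` output to the glibc package suffix described above.
arch_to_suffix() {
  case "$1" in
    x86_64)  echo linux-x64 ;;
    aarch64) echo linux-arm64 ;;
    *)       return 1 ;;
  esac
}

if suffix=$(arch_to_suffix "$(uname -m)"); then
  # Hypothetical install layout; the SDK's real package names differ.
  export CLAUDE_BIN_PATH="/app/node_modules/claude-bin-$suffix/claude"
fi
```

An operator can still override the result by setting `CLAUDE_BIN_PATH` before this block runs, matching the `.env` / `docker run -e` escape hatch.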

* fix(docker): warn on unsupported CPU arch in CLAUDE_BIN_PATH detection

Add a `*)` fallback branch so operators on unsupported architectures
(riscv64, ppc64le, etc.) get an explicit warning at startup rather than
a silent no-op followed by a cryptic SDK error later.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix(docker): verify glibc Claude binary exists before pinning CLAUDE_BIN_PATH

Add a file-existence check after the uname -m arch detection so a missing
or renamed binary in a future SDK version emits a clear WARN at startup
rather than letting the container start cleanly and failing silently on
first Claude invocation.

Follows the project's Fail Fast principle: surface the problem as early
as possible with an actionable message.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix(docker): fail fast on unsupported arch or missing Claude binary

The arch-detection block warned and continued in two failure modes —
unsupported CPU and missing pinned binary — which left CLAUDE_BIN_PATH
unset and silently fell through to the SDK's musl-first resolver, the
exact bug this fix targets. Exit non-zero in both cases so startup
surfaces a clear error instead of a delayed runtime failure.

Also switch the existence check from -f to -x (the next thing we do
with the path is execute it) and unset the helper var so it doesn't
leak into the child process environment.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
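The final fail-fast shape can be sketched as a resolver function. `BIN_ROOT` and the directory layout are hypothetical; what the sketch preserves from the fix is the behavior: error out on both failure modes, test with `-x` rather than `-f`, and keep helper vars out of the child environment.

```shell
BIN_ROOT="${BIN_ROOT:-/app/sdk}"     # hypothetical root of the SDK's binaries

# Echo the pinned binary path, or fail loudly so startup aborts.
resolve_claude_bin() {
  case "$(uname -m)" in
    x86_64)  _suffix=linux-x64 ;;
    aarch64) _suffix=linux-arm64 ;;
    *) echo "ERROR: unsupported CPU arch: $(uname -m)" >&2; return 1 ;;
  esac
  _bin="$BIN_ROOT/claude-$_suffix/claude"
  # -x, not -f: the very next thing done with this path is execute it.
  if [ ! -x "$_bin" ]; then
    echo "ERROR: pinned Claude binary missing or not executable: $_bin" >&2
    return 1
  fi
  echo "$_bin"
}

# Entrypoint usage would then be roughly:
#   CLAUDE_BIN_PATH=$(resolve_claude_bin) || exit 1
#   export CLAUDE_BIN_PATH
#   unset _suffix _bin                 # don't leak helpers to the child process
#   exec "$@"
```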

---------

Co-authored-by: Archon <archon@archon.dev>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Brings in 0.3.10 release + post-release fixes (incl. coleam00#1492's
restore-pre-squash-merge-history merge) so GitHub's PR diff base
re-aligns with the current dev tip and the PR view stops showing
unrelated upstream commits as part of this PR.